How to Monitor and Analyze AWS Managed Microsoft AD Security Logs Using Amazon CloudWatch and Splunk

 

https://aws.amazon.com/blogs/apn/how-to-monitor-and-analyze-aws-managed-microsoft-ad-security-logs-using-amazon-cloudwatch-and-splunk/

 

AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) makes it possible for you to monitor and analyze security events of your directory in near real-time.

You can now forward security event logs from your directory to Amazon CloudWatch Logs in the Amazon Web Services (AWS) account of your choice, and centrally monitor events using AWS services or third-party applications such as Splunk, an AWS Partner Network (APN) Advanced Technology Partner with the AWS Security Competency.

In this post, I will show you an example of how to detect and respond to unauthorized or unusual activity. Account lockouts, for example, may result from a user who forgot their password, but they could also indicate a bad actor attempting unauthorized access or running a denial-of-service attack against your users.

By detecting account lockouts, you may be able to distinguish between a user who innocently lost access and an attacker, and respond appropriately.

I will also explore how to monitor and create near-real-time alerts for account lockouts in your AWS Managed Microsoft AD using Amazon CloudWatch Logs and Splunk. I’ll accomplish this in four steps:

  1. Enable log forwarding to Amazon CloudWatch Logs.
  2. Configure your Splunk environment.
  3. Stream logs from Amazon CloudWatch Logs to Splunk using an AWS Lambda function.
  4. Configure the monitor account lockouts dashboard.

Assumptions and Solution Architecture

For the purposes of this post, I assume you have already created an AWS Managed Microsoft AD directory and configured a fine-grained password policy that enforces an account lockout policy (not enabled by default).

In this example, I configured a password policy that locks an account after three failed login attempts. I also assume you are already using Splunk Cloud, which is a cloud-native approach to monitoring cloud services.

If you don’t have one already, sign up here and verify your email. This takes you to a login page where you can spin up your Splunk Cloud within minutes.

Splunk Security Logs-1

As you can see, I’ve enabled AWS Managed Microsoft AD log forwarding to Amazon CloudWatch Logs, configured Splunk, used an AWS Lambda function to push the event logs from Amazon CloudWatch Logs to Splunk, and then configured the Splunk dashboard to monitor account lockouts.

You can use Amazon Kinesis Data Firehose as an alternative to an AWS Lambda function. In this post, I’ll use Splunk Cloud instead of Splunk Enterprise, because it eliminates the need for infrastructure deployment and management.

Step 1: Enable Log Forwarding to Amazon CloudWatch Logs

Follow these steps to enable log forwarding from your directory to Amazon CloudWatch Logs:

  • Open the AWS Management Console, select Directory Service, and then select the directory you want to monitor (in my case, corp.com).
  • On the details page, select the Networking & Security tab, and then choose Enable under the Log Forwarding section.

Splunk Security Logs-2

  • Create or select an existing CloudWatch Logs log group that will contain the security logs from your domain controllers. If you have a central security team that monitors your cloud activity from a separate account, you can send the security logs to a log group in that account.

    In this example, I’ll create a new log group in the same account as the directory. Select the Create a New Log Group option and use the suggested log group name. Choose Enable, and then wait 5-10 minutes for the security logs of each domain controller to become available in Amazon CloudWatch Logs. If you prefer to script this step, see the sketch after the screenshot below.

    Note that AWS Directory Service will create or use an existing resource policy with permissions to publish the security logs to the specified log group.

Splunk Security Logs-3
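If you prefer to script this step instead of using the console, the sketch below shows the equivalent calls with boto3. It is a minimal example under stated assumptions: the directory ID and log group name are placeholders for your own values, and error handling is kept to the bare minimum.

```python
# Minimal sketch: enable AWS Managed Microsoft AD log forwarding with boto3.
# The directory ID and log group name are placeholders for your own values.
import json
import boto3

DIRECTORY_ID = "d-1234567890"  # replace with your directory ID
LOG_GROUP = f"/aws/directoryservice/{DIRECTORY_ID}"

logs = boto3.client("logs")
ds = boto3.client("ds")

# 1. Create the log group that will receive the domain controller security logs.
try:
    logs.create_log_group(logGroupName=LOG_GROUP)
except logs.exceptions.ResourceAlreadyExistsException:
    pass  # reuse an existing log group

# 2. Allow Directory Service to publish into the log group
#    (the console creates an equivalent resource policy for you).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ds.amazonaws.com"},
        "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
        "Resource": f"arn:aws:logs:*:*:log-group:{LOG_GROUP}:*",
    }],
}
logs.put_resource_policy(
    policyName="DSSecurityLogsPolicy",
    policyDocument=json.dumps(policy),
)

# 3. Turn on security log forwarding for the directory.
ds.create_log_subscription(DirectoryId=DIRECTORY_ID, LogGroupName=LOG_GROUP)
```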

Step 2: Configure Your Splunk Environment

As I mentioned in the solution architecture overview, I am using an AWS Lambda function to push event logs from Amazon CloudWatch Logs to Splunk. To receive the event logs into Splunk, I must first configure a Splunk HTTP Event Collector (HEC) by following these steps:

  • Open the Splunk management console, select Settings, then Data Inputs, and choose Add New HTTP Event Collector. Here’s a list of properties you must configure:

Splunk Security Logs-4

Below is my configuration example:

Splunk Security Logs-5

  • Enable HEC through the Global Settings dialog box. On the Data Inputs page, select HTTP Event Collector and choose Global Settings. Select Enable in the All Tokens option.

Splunk Security Logs-6
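Before wiring up Lambda, it can help to confirm that the HEC endpoint and token accept events. The sketch below sends a test event with Python; the host and token are placeholders for the values from your own HEC configuration.

```python
# Sketch: send a test event to a Splunk HTTP Event Collector (HEC).
# The host and token below are placeholders for your own HEC settings.
import requests

HEC_URL = "https://input-prd-example.cloud.splunk.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # your HEC token

payload = {
    "event": {"message": "HEC connectivity test", "severity": "info"},
    "sourcetype": "_json",
    "source": "lambda:DSSecurityLogs",
}

resp = requests.post(
    HEC_URL,
    headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    json=payload,
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # a healthy collector answers {"text": "Success", "code": 0}
```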

Step 3: Stream Logs from Amazon CloudWatch Logs to Splunk

Now that I’ve enabled log forwarding to Amazon CloudWatch Logs and configured Splunk, I’ll create an AWS Lambda function to stream logs from CloudWatch Logs to Splunk. To accomplish this, I will use a predefined Splunk CloudWatch log-processing blueprint in Lambda by following these steps:

  • Open the AWS Management Console, select Lambda, and then choose Create Function. Select the Blueprints option, and search for “splunk.” Select the “splunk-cloudwatch-logs-processor” Lambda blueprint and choose Configure.

Splunk Security Logs-7

  • In the Basic Information section, provide a Name for your Lambda function and create or select an AWS Identity and Access Management (IAM) role that grants Lambda the rights to CreateLogGroup, CreateLogStream, and PutLogEvents.

    In this example, I created a new role from the template shown below. The Lambda function will use the AWSLambdaBasicExecutionRole managed policy, which includes the permissions listed above.

Splunk Security Logs-8

  • In the CloudWatch Logs Trigger section, select the Log Group to which you are forwarding AWS Managed Microsoft AD security logs (see Step 1). Provide a Filter Name, and make sure you check the Enable Trigger option.

Splunk Security Logs-9

  • In the Environment Variables section, provide values for the following variables according to your Splunk configuration in Step 2:

Splunk Security Logs-10

  • Choose Create Function, and AWS subscribes this Lambda function to the selected log group. With this, Amazon CloudWatch Logs triggers the subscribed Lambda function each time it receives a new security event from AWS Managed Microsoft AD (a simplified sketch of what the forwarder does appears at the end of this step).

Splunk Security Logs-11

  • After a few minutes, you’ll see your directory security events in your Splunk environment. To see the events from your Splunk dashboard, click Search & Reporting and run the query index=main, then choose the appropriate values from Selected Fields in the left pane. This auto-populates the search query, for example: index=main host="input-prd-p-69vgmjstn6rc.cloud.splunk.com:8088" source="lambda:DSSecurityLogs"

Splunk Security Logs-12
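For reference, here is a simplified Python sketch of what the forwarder does: CloudWatch Logs delivers the subscribed events to Lambda as a base64-encoded, gzip-compressed payload, and the function unpacks it and posts each event to HEC. This is an illustration only, not the blueprint itself (the actual blueprint is written in Node.js), and the environment variable names and use of the requests library are assumptions.

```python
# Simplified sketch of a CloudWatch Logs-to-Splunk forwarder (illustration only;
# the actual "splunk-cloudwatch-logs-processor" blueprint is written in Node.js).
import base64
import gzip
import json
import os

import requests  # would need to be packaged with the function

HEC_URL = os.environ["SPLUNK_HEC_URL"]      # e.g. https://<host>:8088/services/collector/event
HEC_TOKEN = os.environ["SPLUNK_HEC_TOKEN"]  # the HEC token created in Step 2


def handler(event, context):
    # CloudWatch Logs subscriptions deliver events as base64-encoded gzip.
    raw = base64.b64decode(event["awslogs"]["data"])
    data = json.loads(gzip.decompress(raw))

    # Build a newline-delimited batch of HEC events, one per log event.
    batch = [
        json.dumps({
            "time": log_event["timestamp"] / 1000.0,
            "source": f"lambda:{context.function_name}",
            "event": log_event["message"],
        })
        for log_event in data.get("logEvents", [])
    ]

    resp = requests.post(
        HEC_URL,
        headers={"Authorization": f"Splunk {HEC_TOKEN}"},
        data="\n".join(batch),
        timeout=10,
    )
    resp.raise_for_status()
    return {"forwarded": len(batch)}
```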

Step 4: Configure the Monitor Account Lockouts Dashboard

Now that Splunk is receiving security events, I am ready to create a dashboard in Splunk where I can monitor the account lockouts of the directory. Active Directory generates Event ID 4740 every time an account lockout occurs. To monitor this specific event, I need to install the Splunk Add-on for Microsoft Windows, which enables Splunk to understand and parse Windows logs.

From your Splunk dashboard, click on Find More Apps and search for “Splunk Add-on for Microsoft Windows.”

Splunk Security Logs-13

The Splunk Add-on for Microsoft Windows provides Common Information Model mappings for Windows events and allows you to set up the dashboards and alerts that I’ll configure in the next steps. Click Install beside the add-on.

Splunk Security Logs-14

Splunk can now process the log files as Windows security events. Next, I will use Splunk searches to configure a dashboard report that shows details of account lockouts:

  • Create a query to search for the account lockout events (Event ID 4740). Here’s an example of the query: sourcetype=xmlwineventlog EventCode=4740 | table _time TargetUserName, TargetDomainName | rename TargetDomainName as “Caller Computer Name”

Splunk Security Logs-15

  • Save the query by selecting the option Save As Dashboard Panel and provide the requested information. See here for more details on creating Dashboards.

Splunk Security Logs-16

  • You can now see the account lockout events in your Splunk dashboard report.

Splunk Security Logs-17.2

Congratulations! You can now monitor your AWS Managed Microsoft AD security event logs using Splunk in near real-time. Splunk provides additional monitoring and alerting capabilities, such as sending an alert email every time an account lockout occurs.
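One way to script such an alert is with the Splunk SDK for Python (splunklib). The sketch below is an assumption-laden example: the host, credentials, alert name, and recipient address are placeholders, and the saved-search properties shown are common choices for a scheduled email alert rather than a definitive recipe.

```python
# Hedged sketch: create a scheduled Splunk alert that emails on account lockouts.
# Host, credentials, and the recipient address are placeholders.
import splunklib.client as client

service = client.connect(
    host="example.cloud.splunk.com",  # your Splunk management endpoint
    port=8089,
    username="admin",
    password="changeme",
)

search = (
    "search sourcetype=xmlwineventlog EventCode=4740 "
    "| table _time TargetUserName TargetDomainName"
)

service.saved_searches.create(
    "AD Account Lockout Alert",
    search,
    **{
        "is_scheduled": True,
        "cron_schedule": "*/5 * * * *",           # run every 5 minutes
        "alert_type": "number of events",
        "alert_comparator": "greater than",
        "alert_threshold": "0",
        "actions": "email",
        "action.email.to": "secops@example.com",  # placeholder recipient
    },
)
```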

Summary

In this post, I have demonstrated how you can monitor your AWS Managed Microsoft AD directory’s security events using Amazon CloudWatch Logs and Splunk in near real-time.

I used the account lockout event as an example that helps you stay informed and take appropriate action even before your end users reach out to you. In addition, I showed you how to accomplish this by using cloud-based services only. This makes it easier and more cost-effective for you to monitor your directory security events because it eliminates the need to deploy and manage any additional infrastructure or monitoring tools.


Incident Response Plan

  1. Readiness and Detection Review
  2. Business Impact Analysis (BIA)
  3. Computer Security Incident Response Plan (CSIRP) Development
  4. Computer Incident Response Team (CIRT) Development
  5. First Incident Responder Training
  6. Tabletop Exercises
  7. Attack Simulation and Response Exercise (Red/Purple Team)
  8. Annual Review
  9. Digital Forensics and Incident Response (DFIR)
  10. Security Orchestration, Automation and Response (SOAR)

Reference

AI Myth

“The promise of organizations gaining the ability to predict future attacks is a marketing smokescreen hiding real progress in advanced diagnostic analytics and in risk scoring. To be clear, AI has potential to enhance the effectiveness of a security team. In the area of anomaly detection and security analytics, humans working with AI accomplish much more than without it. However, it is far more realistic to strive for “smart automation,” executing tasks with trained professional human effort that is complemented by automation rather than by fully automated AI features.”

Gartner Report: 5 Questions That CISOs Must Answer Before Adopting Artificial Intelligence

How to weigh the potential benefits and risks of machine learning

5 Questions That CISOs Must Answer Before Adopting Artificial Intelligence

Published 29 August 2018 – ID G00350259 – 23 min read


Mentions of artificial intelligence in Gartner security inquiries rose 76% in the past 12 months. Security and risk management leaders and chief information security officers in early-adopting organizations must articulate the potential benefits and risks of machine learning techniques.

Overview

Key Findings

  • The most frequent question about artificial intelligence in security inquiries is, “What is the state of AI in security?” SRM leaders are tempted to believe bold promises about its benefits but are often ill-prepared to evaluate the impact of AI on their mission.
  • A sample review of Leaders from 11 Gartner Magic Quadrants shows that more than 80% of these security vendors include AI in their marketing message. Machine learning and deep neural networks are the most frequently cited techniques.
  • When considering the use of AI, CISOs need to identify whether they have to fight “resistance to change” or “fear of missing out” biases in order to rebalance their AI adoption strategy.
  • Today’s use of AI in security addresses use cases that other techniques can address. Most organizations lack evaluation frameworks and metrics to benchmark these new techniques against the older ones they already paid for.
  • Machine learning is also vulnerable to attacks, highlighting that AI is no silver bullet.

Recommendations

SRM leaders and CISOs building information security management strategy should:
  • Focus on the desired outcome. Define evaluation metrics to measure the quality of AI results in order to help free purchasing decisions from the confusion of marketing hype. New technical approaches in your defense portfolio must achieve measurable results.
  • Assess the impact of using machine learning on staff and data privacy. Identify skills gaps and required training. Classify relevant regulatory requirements based on how AI might impact them.
  • Utilize AI as a complementary technique, beginning with experimental engagements. The low maturity of AI does not preclude utilization, but recency of implementations invites caution.
  • Discourage DIY approaches on AI. The maturity of AI techniques has not yet grown past the Peak of Inflated Expectations.

Strategic Planning Assumptions

By 2022, replacement of conventional security approaches by machine learning technologies will actually make organizations less secure for a third of enterprises.
By 2021, 60% of security vendors will claim AI-driven capabilities, more than half of them undeservedly, to justify premium prices.
In 2020, AI will become a positive net job motivator, creating 2.3 million jobs while eliminating only 1.8 million jobs.

Analysis

Despite existing definitions, “artificial intelligence” is often used as a relative term. Marketers and journalists use the term loosely to describe a broad range of functions. The hype has veiled artificial intelligence in a fog of exaggerated expectations and vague details — even fear: The thought of AI taking over our jobs often generates anxiety. Yet within context, an SRM leader can appreciate AI for how it can help achieve better security outcomes.
The promise of organizations gaining the ability to predict future attacks is a marketing smokescreen hiding real progress in advanced diagnostic analytics and in risk scoring. To be clear, AI has potential to enhance the effectiveness of a security team. In the area of anomaly detection and security analytics, humans working with AI accomplish much more than without it. However, it is far more realistic to strive for “smart automation,” executing tasks with trained professional human effort that is complemented by automation rather than by fully automated AI features.
It is a serious responsibility for SRM leaders and CISOs to determine if, and how, an emerging technology might benefit their organization. In theory, it is not the provider’s choice to make this determination or to shape it. The benefits are something that the technology should prove by itself.
Developing the right strategic approach to effectively incorporate AI in SRM programs requires preliminary work. Security leaders must answer these three questions before reaching out to technology providers (see Figure 1):
  1. Do you need artificial intelligence for the problem you are trying to solve?
  2. How can you measure that it is worth the investment?
  3. What will be the impact of using AI for this?

Figure 1. Three Preliminary Questions to Ask About Artificial Intelligence

Source: Gartner (August 2018)

This research examines the questions SRM leaders most frequently ask Gartner analysts relating to AI implementations in their practices. With sufficient understanding of the capabilities of AI — and the best ways to determine how AI can serve the organization — leaders can utilize these promising technologies with reasonable expectations.

What Should CISOs and Their Team Know About Artificial Intelligence?

“Artificial Intelligence Primer for 2018,” a Gartner analysis that lists upcoming research on the topic, includes the following definition of artificial intelligence:
Artificial intelligence refers to systems that change behaviors without being explicitly programmed based on data collected, usage analysis and other observations. These systems learn to identify and classify input patterns, probabilistically predict and operate unsupervised.
Using artificial intelligence simply means using algorithms to classify information and predict outcomes faster — and in greater volume — than humans can.
AI implementations today are “special-purposed AI” that are limited to specific narrow use cases. This is the case for any AI claim in security today. “Artificial General Intelligence” (AGI) designates a general-purpose AI that does not exist yet (see “Hype Cycle for Artificial Intelligence”).

Artificial Intelligence Is in the Eye of the Beholder

CISOs should remember that — as with any popular term — there are many interpretations of “artificial intelligence.” Service providers might use artificial intelligence to describe the most basic feature, leveraging static artificial intelligence, such as a knowledge base of attack techniques, whereas using the term “artificial intelligence” sets much higher expectations for prospective customers.
To find the appropriate AI solution for the enterprise’s needs, SRM leaders must correctly translate the marketing hype that has heralded AI in misleading terms. Several common marketing buzzwords have turned out to have much less dramatic but nonetheless useful meanings (see Table 1).

Table 1: Navigating AI Buzzwords

Term used as a buzzword → most likely meaning:

  • “Next generation” → “Our latest release”
  • Holistic approach → Multifunction
  • Artificial intelligence → Algorithms
  • Machine learning → Algorithms processing large amounts of data
  • Deep learning (deep neural network) → Multistep machine learning
  • Predictive → Diagnostic
Source: Gartner (August 2018)

Understanding the Basics of AI

More advanced algorithms can immediately reveal crucial information about vulnerabilities, attacks, threats, incidents and responses. Engaging the proper AI function could result in a noticeable augmentation of a security team’s capabilities (see Figure 2).

Figure 2. High-Level Descriptions of AI Concepts and Security Use Cases

Source: Gartner (August 2018)

The most frequent concepts CISOs will encounter when discussing artificial intelligence are probabilistic reasoning (often generalized as “machine learning”) and computational logic (often delivered through rule-based systems). (See “Artificial Intelligence Hype: Managing Business Leadership Expectations” for a more detailed explanation.)
Machine learning is a data science technical discipline that automatically extracts patterns (knowledge) from data. At a high level, machine learning:
  1. Extracts target “features” from data (e.g., email metadata)
  2. Automatically trains a decision logic, called a “model,” with data (e.g., Bayesian networks)
  3. Applies the model on a given input (e.g., emails, files) to determine and estimate output (classify an email as spam or find malware)
There are three major subdisciplines of machine learning, which relate to the types of observation provided:
  • Supervised learning: Using precategorized data as a training set for the model (e.g., a dataset of known good emails and known spam; see the sketch after this list)
  • Unsupervised learning: Using unlabeled data to train the model (e.g., network traffic)
  • Reinforcement learning: Using the results of an action to adjust behavior (subsequent actions) in similar circumstances (e.g., behavior anomaly detection)
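To make the supervised case concrete, here is a minimal spam-classification sketch, assuming scikit-learn is available; the tiny inline dataset and the word-count features are illustrative only.

```python
# Minimal supervised-learning sketch: classify emails as spam or not spam.
# The tiny inline dataset is illustrative; real training sets are labeled corpora.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Your invoice for last month is attached",     # legitimate
    "Meeting moved to 3pm, see updated agenda",    # legitimate
    "You won a prize! Click here to claim now",    # spam
    "Cheap meds, limited offer, act immediately",  # spam
]
labels = ["ham", "ham", "spam", "spam"]

# 1. Extract features (word counts) and 2. train a model on pre-labeled data.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

# 3. Apply the model to new input to estimate the output class.
print(model.predict(["Claim your prize now, click here"]))  # likely ['spam']
```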
A deep neural network, or deep learning, expands machine learning by discovering intermediate representations. This allows SRM leaders to tackle more complex problems and to solve problems with higher accuracy (see “Innovation Insight for Deep Learning”). A typical example of using deep learning is image processing. In security, some complex problems, such as building risk scores from various sources and analyzing network traffic, might benefit from using this approach.
Machine learning and deep neural network implementations are opaque. Even with knowledge of the mathematics behind these concepts, it is difficult for the user to identify the source data and rationale behind the output.
Gartner has published extensive research on the various uses of AI under the Artificial Intelligence key initiative. A sampling of research documents (see the list below) will help you learn more about artificial intelligence.
  • “Hype Cycle for Artificial Intelligence”
  • “Artificial Intelligence Hype: Managing Business Leadership Expectations”
  • “Machine Learning: FAQ From Clients”
  • “Innovation Insight for Deep Learning”
Recommendations:
  • Learn that machine learning and deep learning behave as black box algorithms with your organization’s data. This will have implications not only on data privacy but also on tool evaluation, as “production proof of concept” might neither prove easily feasible nor show future performance.
  • As staff from various teams learn about AI, enforce knowledge sharing and build an internal knowledge base on the topic.
  • Acknowledge AI hype; hear “algorithm” when vendors or markets say “artificial intelligence,” “machine learning” or any other related buzzword.
  • Develop a baseline understanding of AI concepts for members of the security team who might deal with the related technologies to optimize costs and avoid unneeded purchases.

What Is AI’s Expected Impact on SRM?

Artificial intelligence’s promise is to automatically process data and apply analytics functions much better than human teams can without aid. Improved automation and data analytics apply to security analytics — such as SIEM, network traffic analysis, user and entity behavior analytics — and infrastructure protection — endpoint protection, web application firewall, bot mitigation, cloud access security brokers. AI solutions promise to offer improved efficiency and speed to find more attacks, reduce false alerts and perform faster detect-and-respond functions.
The use of AI is also visible in integrated risk management, where its promise is to better support risk-based decision making by identifying and prioritizing risk.
Machine learning is already pervasive in many security markets. It might incrementally improve the benefits of existing technologies when implemented as a feature and could also answer new needs when machine learning is at the core of a new product (see Figure 3).

Figure 3. What Should You Expect From Using Machine Learning?

Source: Gartner (August 2018)

To engage AI with reasonable expectations of an improved SRM practice, CISOs will wish to gain at least a minimum understanding of requirements. They should prepare to be the leading agent of the company’s security use of AI. They should become familiar with the capabilities of AI-based systems to assess the potential of these systems for greater efficiency and effectiveness.
More importantly, SRM leaders should be the voice of reason and clarity, setting suitable expectations for stakeholders and employees around the reality of AI versus its exaggerations. Staff and colleagues must also follow their lead and concentrate their energies on several crucial actions in a security practice. These understandings are described in Figure 4.

Figure 4. Security Roles and Required Understandings in AI Engagements

Source: Gartner (August 2018)

Because many algorithms must consume large amounts of data, CISOs should engage early with privacy leaders to understand the implications of using an AI product or feature on data security and privacy.
Technical advisors and security operations need a more in-depth understanding of AI technology. They should start by defining the right evaluation metrics to assess the efficacy of the new techniques and to avoid being influenced by the coolness factor of a new shiny tool. SRM leaders should appreciate that lack of transparency and trouble with evaluation of tool effectiveness are the key problems.
There is no assurance that machine-learning-driven results are better than alternate techniques. Available feedback is still scarce. When evaluating solutions that make AI claims, security leaders should focus on the outcome, not the technique. When adding a new technical approach to your defense portfolio, it needs to achieve measurable results.
Machine learning is fallible. It can offer incorrect or incomplete conclusions when using insufficient data, domain insights or compute infrastructure. There may be no mathematical solution designed for the organization’s specific needs. In many areas of security where AI is unproved, including anomaly detection, the efficacy of machine learning is difficult to benchmark. Part of the reason for that is the lack of reliable security metrics.
Machine learning is also vulnerable to attacks. Attacks on classification algorithms introduce just enough noise to change the diagnostic.1 As they have with every new defense technique, attackers will adapt and might already be leveraging AI to improve their attacks.
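As a small illustration of that fragility, the toy sketch below perturbs a feature vector just enough to flip a linear classifier's decision. The numbers are made up and the model is deliberately simple; it is not a description of any real attack tool.

```python
# Toy illustration: a small, targeted perturbation flips a linear classifier.
import numpy as np

# A "trained" linear model: score = w.x + b, flag as malicious if score > 0.
w = np.array([1.5, -2.0, 0.5])
b = -0.1

x = np.array([0.4, 0.1, 0.2])   # original sample
print(float(w @ x + b))         # 0.4 -> flagged as malicious

# The attacker nudges each feature in the direction that lowers the score.
epsilon = 0.3
x_adv = x + epsilon * np.sign(-w)
print(float(w @ x_adv + b))     # -0.8 -> now classified as benign
```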
Recommendations:
  • Inventory the areas where AI techniques are already available to improve existing solutions. Test if they actually improve upon them, as anticipated.
  • Identify new categories of solutions leveraging AI techniques that could help fill a gap in your security posture.
  • Define AI-related roles, responsibilities and required understanding in the security team (see Figure 4).
  • Focus everyone’s attention on the security outcome, not the availability of an AI technique. This requires investing resources and time on heavy testing before scaling the use of an AI technique.

What Is the State of Artificial Intelligence in Security?

As shown in the “Hype Cycle for Artificial Intelligence, 2017,” the most useful techniques for security (machine learning and deep learning) are at the Peak of Inflated Expectations. This suggests that early adopters will undergo a period of experimentation before optimum results will be achieved.
This low maturity of AI in general is one of the reasons why it is probably a bad idea for security organizations to try an autonomous DIY — build your own AI — approach to implement AI for security objectives. SRM leaders should recognize that exchanges of knowledge with other teams in the organization or within vertical industries may be required to hasten payback. Specialized resources are scarce, and the AI tools and frameworks are not fully mature yet.
Similarly, most technology providers’ AI initiatives related to SRM are immature. Even when excluding false claims, the solutions and engagements in AI from security vendors are recent. This is apparent in the form of a lot of “AI version 1.0” implementations in many security products. These implementations might also rely on third-party AI frameworks.
Gartner estimates that many of today’s AI implementations would not pass due diligence testing in proving that they achieve significantly better results than other existing techniques.
Some vendors rebrand statistical analysis with a new name. For example, for a long time web application firewalls have used statistical approaches to provide automated application pattern learning, which is now called AI. These AI maturity levels, ranging from immature to experimental, do not preclude utilization. Security leaders should treat AI as emerging technologies, adding them as experimental, complementary controls.
In the Enterprise survey, conducted in March 2018, “cybersecurity” comes as the most frequent, most critical and most prominent AI project within organizations (see Figure 5). Gartner predicts that, by 2021, 50% of enterprises will have added unsupervised machine learning to their fraud detection solution suites (see “Begin Investing Now in Enhanced Machine-Learning Capabilities for Fraud Detection”).

Figure 5. Cybersecurity Leads AI Use in Enterprises

Source: Gartner (August 2018)

SRM leaders may find that machine learning techniques might have better results compared to signature-based controls or other heuristics, but they must build a system to measure and compare results between various techniques used for the same purpose.

The Promise of Predictive Security Analytics

Predictive analytics is one of the four categories of analytics (descriptive, diagnostic and prescriptive are the others). It is also an area where the promise and expectations largely exceed the current state of AI. A solution in this category is designed to answer the question, “What will happen?”
An example of true prediction can be found in nonsecurity areas, such as predictive maintenance. A security example might forecast a company’s likelihood of compromise. For example, “We estimate that you have 90% chance of being breached by malware within two weeks. This likelihood is because you have servers publicly exposed and we are seeing active infection in organizations from the same vertical in your region.”
In “Combine Predictive and Prescriptive Analytics to Drive High-Impact Decisions,” Gartner offered a reasonable explanation of what predictive analytics addresses:
Predictive analytics addresses the question, “What is likely to happen?” It relies on such techniques as predictive modeling, regression analysis, forecasting, multivariate statistics and pattern matching.
“What is likely to happen?” can be answered with a probability, which is admittedly less appealing than a true prediction of the future. Arguably, a risk score is closer to advanced diagnostics than it is to a prediction. Still, many security analytics providers actively leverage the “predictive analytics” message to present their scoring mechanisms.
Recommendations:
  • Rationalize any temptation to do-it-yourself AI for security.
  • Treat security products and features leveraging AI as emerging yet unproven technologies.
  • Beware before giving any security budget to self-proclaimed seers.

What Should You Ask Security Vendors About AI?

Evaluating emerging technologies is hard because there is no feedback, no template for building requirements and a lot of uncertainties around long-term value. CISOs should ensure that their team avoids evaluation biases:
When considering the use of artificial intelligence, CISOs need to identify whether they have to fight “resistance to change” or “fear of missing out” biases.
One important consideration for leaders to note is that adding artificial intelligence as a feature of an existing product might look less impactful than adding a new product, but in reality it’s not that simple. The simple feature might “call home” and leak data to the vendor’s cloud infrastructure, which could break some privacy commitments. A stand-alone appliance product with local machine learning at its core, deployed in detection mode only, wouldn’t be that risky to evaluate.
When algorithms are applied locally, and there is no data sharing, or when these risks have been handled appropriately, testing an AI feature is easy. Adding a new algorithm as a feature of an existing platform is another way to test how the new feature can augment technology.
The section below includes some of the content from “Questions to Ask Vendors That Say They Have ‘Artificial Intelligence.’” In the context of using AI for security, we can group the questions into relevant categories and add some specific ones.
Artificial Intelligence Mechanics
  • What algorithms does the product use to analyze data?
  • Which analytics methods (ML and others) contribute to AI functionality?
  • How do you upgrade the AI engine (e.g., the model) once deployed in production?
  • For each analytic method mentioned above, please indicate the alternate technical approaches used to solve the same problem (on your product or on competitors’ products).
Privacy
  • How can we see what happens with data that is related to my project?
  • What data and compute requirements will you need to build the models for the solution?
  • Does your product send anything outside of our organization once deployed (“call home”)?
    • If yes, please describe (and provide samples) of what the product sends.
    • If yes, please describe configuration options available to control/disable the feature.
    • If yes, please describe the security measures you (the vendor) deploy to ensure the security of your customers’ information.
  • How can we view/control data used by the solution?
  • Can we wipe data in specific situations — e.g., data related to a departing employee?
Security Benefits
  • What are the security and performance metrics relevant to measure the results from AI?
  • Could you provide third-party resources, such as independent testing reports, on your AI technology?
  • Can you provide peer references with a similar use case to ours?
  • What resources are available to gather and refine data that the AI solution can use so that its outcomes improve?
Process and People
  • How does your solution integrate in our enterprise workflow (e.g., incident response, ticketing)?
  • Does your solution integrate with third-party security solutions?
  • How much staff time should I expect to devote to tuning and maintaining the solution?
  • Please list available training courses for personnel operating the solution.
  • Please describe available reports security operations can use to communicate about the solution.
Any use of metrics for security might be helpful in determining the potential performance benefits of a solution. However, leaders should use rationalized metrics that are specific to their organization’s operational expectations.
Measuring AI success is a challenge. SRM leaders and CISOs should focus on measuring the potential benefits that any solution offers against an identified threat vector rather than on whether it uses AI or not.
For some of these technologies, a competitive Proof of Concept process will reveal to the selecting organization the realities of the AI capability offered by the vendor while shortening the evaluation period.
Recommendations:
  • Use and fine-tune the above list of questions when surveying vendors with claims of AI use.
  • Don’t create specific metrics for AI but evaluate the outcome against the same metrics that you use for other techniques.
  • Test, test, test. The novelty of these techniques implies a lack of independent testing and difficulty in getting peer references with long enough experience with the product.

Should You Fire Your Security Team (and Find a New Job)?

According to Gartner’s CIO survey, a third of organizations do not have a full-time security specialist. Over 25% of respondents indicate that they need employees with digital security skills. The widespread and ongoing shortage of qualified security professionals means that these skills are hard, if not impossible, to find. These skills shortages leave security teams without sufficient numbers to complete SRM’s complex mission.
However, this doesn’t mean that technologies and new techniques, such as AI, are solely the answer to this challenge. With the development of new security roles, and the commensurate acquisition of new security competencies and skills, organizations can manage risk from digital business initiatives by assigning the right people with the right skills and competencies to the right roles.
Does this mean that CIOs should fire the CISO and the security team? No. Security and risk leaders must shift their view from hiring only to optimizing their security function.
Indeed, some functions will be automated, such as log management. Others may be replaced or augmented by machine learning capabilities (see the “Market Guide for Endpoint Detection and Response Solutions”), but that does not mean that leaders are in any way, shape or form ready to disband their security team. AI is a capability that will enable tools to become more powerful and efficient. However, those tools will only be as good as the practitioners using them — the corollary to “it’s the poor carpenter who blames his tools.”
Contrary to any warnings that AI’s ascendance means the disappearance of the human worker, Gartner predicts that, by 2020, AI will create more jobs (2.3 million) than it eliminates (1.8 million). Upkeeping and tuning AI implementation may mean new hires with rare and expensive skills in the future.
SRM leaders have already gone through several transformations in recent years. Adding AI to enterprise’s security portfolio has some similarities with these transformations that CISOs have already gone through (see Table 2).

Table 2: Using Artificial Intelligence Compared to Recent Trends

Recent trend → similarities with using artificial intelligence:

  • Cloud-based security → Trusting a third-party provider to process your data; “black box” technology and little configurability at start
  • Next-generation firewall → Shift in expected results for existing technologies
  • Network sandboxing → Uncontrollable hype in an emerging market
Source: Gartner (August 2018)
SRM leaders should have learned from the previous big technology shifts that hiring an “expert” rarely works for emerging areas because the required skills are not available yet.
Artificial intelligence pioneers have learned to aim for fairly “soft” outcomes when they engage AI. These leaders focus on worker augmentation, not on worker replacement.
The skills required to embrace emerging technologies change faster than most organizations can adapt. More importantly, these skills will open up the security industry to new roles such as “data security scientist” and “threat hunters” (see “New Security Roles Emerge as Digital Ecosystems Take Over”). The data security scientist role, for example, incorporates data science and analytics into security functions and applications. Specifically, this role determines how machine learning knowledge-based artificial intelligence can be deployed to automate tasks and orchestrate security functions by using algorithms and mathematical models to reduce risk.
To sharpen the specific expertise of your staff to incorporate those new roles, consider a variety of skills development exercises and development platforms. Talent-development platforms include universities, institutions, SANS training courses, conferences and table-top exercises. At the most mature point, consider the use of a cyber range (see “Boost Resilience and Deliver Digital Dexterity With Cyber Ranges”).
Recommendations:
SRM leaders overseeing information security management should:
  • Shift focus from hiring and buying to optimizing existing security programs.
  • Consider artificial intelligence as a capability with the potential to improve the effectiveness and efficiency of the tools at the disposal of SRM practitioners. It is not a magical replacement for them.
  • Prioritize experimentations so AI engagements are directed at challenges in which you lack the resources or corporate worker base to succeed.
  • Build a decision framework for incorporating AI that favors fair evaluation and handles privacy impacts.
  • Use AI as an opportunity to strengthen or build your operational security metrics and RFP requirements.
  • Optimize your security function by eliminating manual processes and developing your staff’s expertise; do not remove your security team.

Evidence

The 2018 Gartner CIO Survey was conducted online from 20 April 2017 to 26 June 2017 among Gartner Executive Programs members and other CIOs. Qualified respondents were the seniormost IT leader (CIO) for their overall organization or a part of their organization (for example, a business unit or region). The total sample is 3,160, with representation from all geographies and industry sectors (public and private). The survey was developed collaboratively by a team of Gartner analysts and was reviewed, tested and administered by Gartner’s Research Data and Analytics team.
Gartner’s Security and Risk Survey was conducted between 24 February 2017 and 22 March 2017 to better understand how risk management planning, operations, budgeting and buying are performed, especially in the following areas:
  • Risk and security management
  • Security technologies and identity and access management (IAM)
  • Business continuity management
  • Security compliance and audit management
  • Privacy
The research was conducted online among 712 respondents in five countries: U.S. (n = 141), Brazil (n = 143), Germany (n = 140), U.K. (n = 144) and India (n = 144).
Qualifying organizations have at least 100 employees and $50 million in total annual revenue for FY16. All industry segments qualified, with the exception of IT services and software and IT hardware manufacturing.
Further, each of the five technology-focused sections of the questionnaire required the respondents to have at least some involvement or familiarity with one of the technology domains we explored.
Interviews were conducted online and in a native language and averaged 19 minutes. The sample universe was drawn from external panels of IT and business professionals. The survey was developed collaboratively by a team of Gartner analysts who follow these IT markets and was reviewed, tested and administered by Gartner’s Research Data and Analytics team.
Disclaimer: “Total” results do not represent “global” findings and are a simple average of results for the targeted countries, industries and company size segments in this survey.

How to Report an SMS Phishing Scam

Example SMS scam message (see the screenshots below). To report it:

  1. Look up the IP address using Whois (a scripted lookup is sketched after this list).
  2. Identify the hosting service.
  3. Identify the phone number from the message.
  4. Submit an abuse report to the hosting service and upload screenshots and supporting information.
  5. Report to Medicare or the company being impersonated.
  6. Report the scam at https://www.scamwatch.gov.au/report-a-scam
  7. Submit the IP to https://www.abuseipdb.com or https://www.spamcop.net/anonsignup.shtml
  8. See https://www.esafety.gov.au/women/lifestyle/shopping-and-banking/scams for further guidance.
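For step 1, a quick way to resolve the link in the scam message and pull the hosting details is sketched below. The URL is a placeholder, and the lookup shells out to the system whois client, which is an assumption about your environment (a Python whois library would work as well).

```python
# Sketch: resolve a suspicious link's domain to an IP and pull its whois record.
# The URL is a placeholder; the system "whois" client is assumed to be installed.
import socket
import subprocess
from urllib.parse import urlparse

suspicious_url = "http://example-scam-site.test/claim-rebate"  # placeholder

domain = urlparse(suspicious_url).hostname
ip_address = socket.gethostbyname(domain)  # resolve the hosting IP
print(f"{domain} resolves to {ip_address}")

# The whois record usually names the hosting provider and an abuse contact.
whois_record = subprocess.run(
    ["whois", ip_address], capture_output=True, text=True, check=False
).stdout
print(whois_record)
```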

[Screenshots: example “Medicare Rebate System” scam message and the resulting security error page]

 

Innovation Insight for Security Orchestration, Automation and Response

Published 30 November 2017 – ID G00338719 – 24 min read


Enterprises are striving to keep up with the current threat landscape with too many manual processes, while struggling with a lack of resources, skills and budgets. Security and risk management leaders should determine which SOAR tools improve security operations efficiency, quality and efficacy.

Overview

Key Findings

  • Security operations teams struggle to keep up with the deluge of security alerts from an increasing arsenal of threat detection technologies.
  • Security operations still primarily rely on manually created and maintained, document-based procedures for operations, which leads to issues such as longer analyst onboarding times, stale procedures, tribal knowledge and inconsistencies in executing operational functions.
  • The challenges from an increasingly hostile threat landscape, combined with a lack of people, expertise and budget are driving organizations toward security orchestration, automation and response (SOAR) technologies.
  • Threat intelligence management capabilities are starting to merge with orchestration, automation and response tools to provide a single operational tool for security operation teams.

Recommendations

IT security and risk management leaders responsible for security monitoring and operations should:
  • Assess how SOAR tools can improve the efficacy, efficiency and consistency of their security operations by using orchestration and automation of threat intelligence management, security event monitoring and incident response processes.
  • Focus on automating tasks and orchestrating incident response, starting with procedures that are easy to implement and where machine-based automation will reduce incident investigation cycle times.
  • Use external threat intelligence as a key way to improve the efficacy of security technologies and processes within the security operations program.

Strategic Planning Assumption

By year-end 2020, 15% of organizations with a security team larger than five people will leverage SOAR tools for orchestration and automation reasons, up from less than 1% today.

Analysis

Security and risk management leaders responsible for security monitoring and operations face an increasingly challenging world. Attackers are improving their ability to bypass traditional blocking and prevention security technologies, and end users continue to fall victim to attackers through social engineering methods, while still failing to carry out basic security practices well. While mean time to detect threats may be trending down across industries,1 it still takes far too long. Once detected, the ability to respond to, and remediate, those threats is still a challenge for most organizations. Additionally, many security teams have overinvested in a plethora of tools. As a result, they are suffering from alert fatigue and multiple-console complexity, and they face challenges in recruiting and retaining security operations analysts with the right set of skills and expertise to effectively use all those tools. This is all playing out against the backdrop of a growing attack surface that is no longer restricted to on-premises IT environments.
The attack surface today encompasses multiple forms of cloud (SaaS, IaaS and PaaS) and mobile environments, and even extends to third-party organizations that are suppliers to upstream organizations. Finally, effective security monitoring requires not only tools and well-documented incident response processes and procedures, but also the ability to execute them with consistency and precision, and the capability to refine and update responses as best practices emerge. Many organizations have few, if any, of these procedures documented. Sometimes they are just monolithic and inflexible, and continue to rely on ad hoc responses over and over again.
Since Gartner’s first analysis of the SOAR space (which was initially defined by Gartner as “security operations, analytics and reporting”), the vendor and technology landscape has evolved. In 2017, many technologies claim the ability to orchestrate incident response, but they have limitations that restrict the overall benefits they can deliver for the efficacy of an operations team. Examples of these shortcomings include a limited ability to show the big picture of an organization’s state of security, or a lack of connectivity to the organization’s ecosystem of tools. Security orchestration and automation have become closely aligned with security incident response (SIR) and general operations processes. Security information and event management (SIEM) technology vendors have incorporated automated response capabilities to varying degrees. Automated response is also appearing in other security technologies as a feature. The lack of centralized capabilities in the above solutions leaves security teams with the responsibility to manually collect and stitch together all this information, and to work with manual playbooks for tasks related to each type of incident.
Figure 1 shows a continuous set of activities that can be performed by an SOC team by using SOAR technology. The figure reflects the use of the CARTA strategy for continuous monitoring and visibility.

Figure 1. SOAR Overview

Source: Gartner (November 2017)


Definition

Gartner defines security orchestration, automation and response, or SOAR, as technologies that enable organizations to collect security threats data and alerts from different sources, where incident analysis and triage can be performed leveraging a combination of human and machine power to help define, prioritize and drive standardized incident response activities according to a standard workflow. SOAR tools allow an organization to define incident analysis and response procedures (aka plays in a security operations playbook) in a digital workflow format, such that a range of machine-driven activities can be automated.

The Evolution of SOAR From 2015 to 2017

In 2015, Gartner described SOAR (then defined as “security operations, analytics and reporting”) as tools that utilized machine-readable and stateful security data to provide reporting, analysis and management capabilities to support operational security teams. Such tools would supplement decision-making logic and context to provide formalized workflows and enable informed remediation prioritization.
As this market matures, Gartner is witnessing a clear convergence among three previously relatively distinct, but small, technology markets (see Figure 2). These three are security orchestration and automation, security incident response platforms (SIRP), and threat intelligence platforms (TIP).

Figure 2. Convergence of SOAR Tools
SOA: security operations automation; TVM: threat and vulnerability management

Source: Gartner (November 2017)

The majority of solutions that Gartner tracks are mostly related to core security operations functions, such as responding to incidents, which are addressed by existing tooling (for example, a SIEM). SOAR integrates dispersed security data, and provides security teams with the broad functionality to respond to all types of threats. SOAR also makes processes more efficient and accurate, and allows for automation of common subtasks or an entire workflow. The primary target for a SOAR solution is the security operations center (SOC) manager and the analysts responsible for incident response.
Gartner is also tracking an increasing role of SOAR functionality among TIP vendors. Indeed, SOAR’s central role in the SOC makes it ideally suited to validate the quality of the threat intelligence used in an organization. By confirming alerts as true positives or false positives, SOAR tools can confirm or refute the threat intelligence used to come to that conclusion. Likewise, the SOAR tool can push validated threat intelligence to all the tools and security controls in the organization that can take advantage of the indicators of compromise for local enforcement.

Description and Functional Components

SOAR can be described by the different functions and activities associated with its role within the SOC, and by its role with managing the life cycle of incident and security operations:
  • Orchestration — How different technologies (both security-specific and non-security-specific) are integrated to work together
  • Automation — How to make machines do task-oriented “human work”
  • Incident management and collaboration — End-to-end management of an incident by people
  • Dashboards and reporting — Visualizations and capabilities for collecting and reporting on metrics and other information
In the following sections, we will review each of these functions in more detail.
What SOAR is not:
  • Governance, risk and compliance (GRC), where the focus is on managing adherence to compliance frameworks, often based on controls. Gartner now refers to this space as integrated risk management (IRM), which includes IT risk management as well as audit and risk management.
  • SIEM, which provides reliable log ingestion and storage at scale, as well as normalization and correlation of events for real-time monitoring and the automated detection of security incidents.
  • User and entity behavior analytics (UEBA) or advanced threat detection, which are focused on behavioral and network analysis or the detection of indicators of compromise.
  • Threat and vulnerability management, which provides awareness for the types of threats facing an organization. TVM is focused on identifying, prioritizing and remediating security weaknesses based on potential risk and impact of vulnerabilities.
Drivers for SOAR include:
  • Staff shortage: Due to staff shortages in security operations (see “Adapt Your Traditional Staffing Practices for Cybersecurity”), there is a growing need to automate, streamline workflows and orchestrate security tasks. Also, the need to demonstrate to management the organization’s ability to reduce the impact of inevitable incidents is ever-present.
  • The explosion of unattended alerts from other security solutions: The process of determining whether a specific alert deserves attention requires querying many data sources to triage.
  • Threats becoming more destructive: Threats destroying data, the disclosure of intellectual property and monetary extortion require a rapid, continuous response with fewer mistakes and fewer manual steps.
  • The need to better understand the intersection of your environment with the prevailing threat landscape: A large number of security controls on the market today benefit from threat intelligence. SOAR tools allow for the central collection, aggregation, deduplication, enrichment of existing data with threat intelligence, and, importantly, converting intelligence into action.

Orchestration

Gartner sees orchestration as the ability to coordinate informed decision making, and to formalize and automate responsive actions based on measurement of the risk posture and the state of an environment. SOAR orchestrates the collection of alerts, assesses their criticality, coordinates incident response and remediation, and measures the whole process.

One example is the response to a reported email that may be suspicious. The end user reports a suspicious email to the SOC, which requires an investigation to confirm whether the sender has a bad reputation (through threat intelligence). The use of DNS tools would confirm the origin of the email. The analyst would have to extract any hyperlink from the email and validate it through URL reputation, detonate the link in a secure environment, or run attachments in a sandbox. This process would be repeated for every reported suspicious email before it can be confirmed as an incident. Orchestration provides enough information (automating the data collection into a single place) to help the analyst review the situation and decide whether it is suspicious. If the investigation confirms an incident, it would initiate the workflow (playbook) to respond to it. Integration with the email system, sandbox and ticketing system would provide an automated process to search the email system for all messages with the suspicious link or attachment. The system would then quarantine email that was sent to other users while waiting for the decision to delete or allow access to the quarantined email.

Think of the process as conducting an orchestra: a conductor controls multiple musical instruments to produce not just noise, but music. Today, security teams have the problem of having to pick up and play each instrument, but they can’t play many instruments at the same time. It takes time to pick up and put down each instrument. In the world of security operations, this is called “context switching,” and it costs teams time (dead time) to orchestrate and perform each step in a process.
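As a rough illustration of what such a playbook looks like in code, the sketch below strings together the enrichment and containment steps for a reported email. The stub functions stand in for the integrations a SOAR platform would provide (threat intelligence, sandbox, email system, ticketing); they are not real APIs.

```python
# Hypothetical sketch of an orchestrated phishing-triage playbook.
# The stub functions stand in for SOAR integrations; they are not real APIs.
import re

URL_PATTERN = re.compile(r"https?://\S+")


def check_sender_reputation(sender):
    return "bad" if sender.endswith("@example-scam.test") else "unknown"    # stub TI lookup


def detonate_url(url):
    return "malicious" if "claim-rebate" in url else "clean"                # stub sandbox


def quarantine_matching_mail(subject):
    print(f"[email system] quarantining messages with subject: {subject}")  # stub


def open_ticket(summary, details):
    print(f"[ticketing] opened incident: {summary}")                        # stub


def triage_reported_email(email):
    """Gather context from several tools in one place, then decide and act."""
    findings = {
        "sender_reputation": check_sender_reputation(email["from"]),
        "url_verdicts": [detonate_url(u) for u in URL_PATTERN.findall(email["body"])],
    }
    malicious = (
        findings["sender_reputation"] == "bad"
        or "malicious" in findings["url_verdicts"]
    )
    if malicious:
        quarantine_matching_mail(subject=email["subject"])
        open_ticket(summary="Confirmed phishing campaign", details=findings)
    return "incident" if malicious else "benign"


print(triage_reported_email({
    "from": "rebates@example-scam.test",
    "subject": "Your rebate is waiting",
    "body": "Claim now: http://example-scam.test/claim-rebate",
}))
```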
Table 1 outlines the main requirements for orchestration in SOAR tools.

Table 1: Summary of Orchestration Capabilities

Capability and minimum requirements:

  • Basic integration: A wide range of out-of-the-box integration connectors to other security solutions. Today, the list of supported vendors might not cover all the technologies you have in your environment.
  • Bidirectional integration: Multiple action types can be described at a high level as “push” or “pull.” “Push” means telling a tool/device to do something. “Pull” means connecting to a tool/device and requesting information it might have. Gartner recommends that end users press their tool vendors to support a full range of both push and pull capabilities via a well-documented and supported API, simple scripts, or a programming language.
  • Feature-rich integration: Flexible API customization to facilitate the use of all features supported by that security vendor’s product. Some security tools offer many functions via API, but just because your tool is supported does not mean that all of those functions are controllable through the security tool’s APIs. Additionally, even if a security tool exposes many functions via API, the SOAR tool may not handle them all. For example, the firewall might only support adding an Internet Protocol (IP) address for blocking, and not a URL. A SOAR tool might not support requesting that a firewall return a response if it has seen a particular IP/URL/file hash.
  • Abstraction layer: Key to the value of SOAR tools is the availability of an abstraction layer, so the analyst does not need to be an expert in the specific APIs, scripts or programming languages of individual tools. Rather, they can use logic and abstraction while the SOAR tool translates that into machine-specific API calls.
Source: Gartner (November 2017)
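To illustrate the abstraction-layer capability above, the sketch below shows one generic analyst-facing action (“block this IP”) dispatched to vendor-specific connectors. The connector classes and their interfaces are assumptions for illustration, not real product APIs.

# Illustrative sketch of an abstraction layer: one generic action ("block_ip")
# dispatched to vendor-specific connectors. The connector classes are hypothetical.
from typing import Protocol


class FirewallConnector(Protocol):
    def block_ip(self, ip: str) -> None: ...


class VendorAFirewall:
    def block_ip(self, ip: str) -> None:
        # In a real connector this would be an authenticated REST call to vendor A's API.
        print(f"POST /api/v1/policies/block {{'address': '{ip}'}}")


class VendorBFirewall:
    def block_ip(self, ip: str) -> None:
        # Vendor B exposes the same capability through a different interface.
        print(f"cli> set address-group blocked add {ip}")


def block_everywhere(ip: str, firewalls: list[FirewallConnector]) -> None:
    """Analyst-facing action: no knowledge of any vendor API is required."""
    for fw in firewalls:
        fw.block_ip(ip)


if __name__ == "__main__":
    block_everywhere("203.0.113.7", [VendorAFirewall(), VendorBFirewall()])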

Automation

Some vendors use the terms “automation” and “orchestration” interchangeably, although they are not the same concept.
Automation is a subset of orchestration. It allows sequences of tasks (commonly grouped into “playbooks”) to be executed against partial or full elements of a security process. Security operations teams can build out relatively sophisticated processes with automation to improve accuracy and time to action. For example, an automated playbook could check a SIEM to see whether an IP address has been seen before, block that address on a firewall or intrusion detection and prevention system (IDPS), or block a URL on a secure web gateway. It can then create a ticket in your ticketing system, or connect to Windows Active Directory and lock the account or reset the password for a user.
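A hedged sketch of that example follows; the connector functions (siem_has_seen, firewall_block, create_ticket, disable_ad_account) are hypothetical stand-ins for real product integrations.

# Minimal sketch of the automation example above: enrich an alert, block the
# indicator, open a ticket, and disable the affected account. All connector
# functions are hypothetical placeholders for real product integrations.
def siem_has_seen(ip: str) -> bool:
    print(f"[siem] searching historical events for {ip}")
    return True


def firewall_block(ip: str) -> None:
    print(f"[firewall] blocking {ip}")


def create_ticket(summary: str) -> str:
    print(f"[ticketing] opening ticket: {summary}")
    return "INC-1042"  # hypothetical ticket ID


def disable_ad_account(username: str) -> None:
    print(f"[active-directory] disabling account {username}")


def brute_force_playbook(source_ip: str, target_user: str) -> None:
    if siem_has_seen(source_ip):
        firewall_block(source_ip)
        disable_ad_account(target_user)
        create_ticket(f"Brute-force activity from {source_ip} against {target_user}")


if __name__ == "__main__":
    brute_force_playbook("198.51.100.23", "jdoe")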
Table 2 outlines the main requirements for automation in SOAR tools.

Table 2: Summary of Automation Capabilities

Capability and minimum requirements:
  • Process guidance: The ability to guide analysts through standardized steps, instructions and decision-making workflows.
  • Workflow with multilevel automation: Flexible workflow formalization along with a set of predefined actions, as well as enforcement, status tracking and auditing capabilities. The ability to automate workflows, with flexibility to inject human responses into the workflow.
  • Playbooks: The ability to code playbooks, either in a standard language such as Python or through a UI that helps define them.
Source: Gartner (November 2017)

Incident Management and Collaboration

Another SOC function that SOAR tools make more efficient is the management of incidents and the collaboration between team members working on them.
This major function is complex. It deals with the life cycle of the incident from the moment an alert is generated, through initial triage, validation as a true or false positive, and hunting, to the final remediation. To carry out this life cycle, the SOC team needs to collaborate within an efficient collaboration framework, while threat intelligence becomes an integral part of the data points for the process.
Incident management and collaboration comprises several activities, described in the following sections.

Alert Processing and Triage
Two key metrics for information security are the mean time to detect (MTTD) and mean time to respond (MTTR). To accomplish an efficient incident response, SOC analysts need a better way to gather supporting information from a wide range of sources to assess and determine which alerts are real incidents. SOAR technologies gather and analyze various security data, which is then made available and consumable by different stakeholders and for use cases beyond the original purpose. Triage ensures that incidents are prioritized by criticality and level of impact, based on the information collected from those other sources.
Event collection is commonly achieved through integration with a SIEM platform. Some solutions can automatically generate incidents for investigation, which removes the need to have a human first notice an incident and then invoke a manual step to create it. A key advantage of deploying SOAR technology is this first pass on alerts, which reduces the noise and the subsequent workload of analysts.
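As a simple illustration of that first pass, the sketch below scores alerts pulled from a SIEM by severity and asset criticality and surfaces the highest-priority ones first. The field names and weights are assumptions, not any product’s schema.

# Illustrative triage sketch: score alerts by severity and asset criticality,
# then surface the highest-priority ones first. Fields and weights are assumed.
ASSET_CRITICALITY = {"domain-controller": 3, "finance-db": 3, "workstation": 1}

alerts = [
    {"id": 1, "severity": 2, "asset": "workstation", "rule": "rare process"},
    {"id": 2, "severity": 3, "asset": "domain-controller", "rule": "account lockout burst"},
    {"id": 3, "severity": 1, "asset": "finance-db", "rule": "failed login"},
]


def priority(alert: dict) -> int:
    return alert["severity"] * ASSET_CRITICALITY.get(alert["asset"], 1)


for alert in sorted(alerts, key=priority, reverse=True):
    print(priority(alert), alert["rule"], "on", alert["asset"])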

Journaling and Evidentiary Support
Some SOAR solutions can record information about actions taken, including details of the action itself, the person taking the action and when it occurred. Such journaling can be extremely useful in complex incidents where the following characteristics may apply:
  • There are questions as to whether apparently separate activity may or may not be linked to a broader operation by the adversary.
  • The incident takes place over an extended period, and so records of activity become a reliable corporate memory.
  • There are multiple people working on an incident or action.
  • Regulations and other mandates require reports to be produced.
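Such a journal can be as simple as an append-only record of who did what and when. The sketch below is only illustrative; the file name and field names are assumptions.

# Minimal journaling sketch: an append-only record of who did what and when,
# suitable for post-incident reviews and reporting. Field names are assumptions.
import json
from datetime import datetime, timezone

JOURNAL = "incident-1042-journal.jsonl"  # hypothetical incident journal file


def record(actor: str, action: str, detail: str) -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "detail": detail,
    }
    with open(JOURNAL, "a", encoding="utf-8") as journal:
        journal.write(json.dumps(entry) + "\n")


record("analyst.a", "blocked-ip", "203.0.113.7 blocked on perimeter firewall")
record("analyst.b", "reset-password", "corp\\jdoe credentials reset after lockout review")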
Table 3 outlines the main requirements for journaling and evidentiary support in SOAR tools.

Table 3: Journaling and Evidentiary Support

Capability and minimum requirements:
  • User interface for investigation: Provide an investigation timeline/screen to collect and store artifacts of the investigation for current and future analysis. Help SOC analysts continue the investigation and response across work shifts by keeping historical information about incidents, along with notes.
  • Collaboration: Coordination of actions and decisions, particularly when easy communication is not possible (for example, due to time zone differences, work shifts or geographic dislocation). Coordination of communication with other staff working on the same or related incidents to provide incident updates.
Source: Gartner (November 2017)

Case Management and Workflow
Two forms of security operations automation are often encountered: one focusing on automating the workflow and policy execution around security operations; the other automating the configuration of compensating controls and threat countermeasure implementation. To fully automate or semiautomate these tasks, solutions frequently provide libraries of common and best-practice playbooks, scripts and connectors covering remediation and response actions and processes. These should support the formalization, enforcement and gathering of key performance indicators of security policies. Custom workflow implementation must also be supported.
One of the biggest challenges in IT security operations is capturing and retaining the “group knowledge” that exists within environments. Security operations staff often have an overabundance of notes, scripts and documents that describe in extreme detail how to perform a specific task; just as often, that knowledge lives only in an analyst’s head and is not fully documented. One of the hidden benefits of SOAR is the ability to codify this tribal knowledge into tools, so it can be captured and used by many others. Gartner inquiries show that workers tend to leave companies after about two to three years, on average. Turnover hurts security operations if key people leave and you no longer have access to institutional memory.
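Codifying a written runbook as a playbook definition is one way to capture that knowledge so the tool can enforce and measure it. The step names and structure below are assumptions for illustration.

# Illustrative sketch of codifying a runbook as a playbook definition the tool
# can enforce and measure. Step names and the structure are assumptions.
MALWARE_OUTBREAK_PLAYBOOK = [
    {"step": "isolate-host", "automated": True},
    {"step": "collect-memory-image", "automated": True},
    {"step": "confirm-scope-with-owner", "automated": False},  # human approval gate
    {"step": "reimage-host", "automated": True},
]


def run_playbook(playbook: list[dict]) -> None:
    for item in playbook:
        if item["automated"]:
            print(f"executing: {item['step']}")
        else:
            print(f"waiting on analyst decision: {item['step']}")
            break  # pause the workflow until a human responds


run_playbook(MALWARE_OUTBREAK_PLAYBOOK)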
Table 4 outlines the main requirements for case management in SOAR tools.

Table 4: Case Management

Capability and minimum requirements:
  • Case management: Reconstructed timelines of actions taken and decisions made, to provide up-to-date progress reports and to support post-incident reviews.
  • Collaboration and granular role-based access control and management: Exchange of information between teams, organizational units and tiers.
  • Capturing knowledge from security analysts: Build an internal knowledge base for incident resolution. Leading products also provide a library of playbooks and processes for popular use cases, as well as access to a community of contributors.
Source: Gartner (November 2017)

Analytics and Incident Investigation Support
Proper investigation requires a centralized tool that helps SOC analysts quickly identify threats and incidents. During an investigation, the ability to store artifacts supports the identification and classification of threats; those artifacts can also be used later for auditing, demonstrating chronologically the actions taken and the data collected that led to the final response. Analytics further helps reduce false positives based on historical data and determines the level of risk assigned to each incident, which drives prioritization across many incidents.
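As a minimal illustration of that last point, the sketch below discounts an alert’s priority by the historical false-positive rate of the rule that fired. The rates and weights are assumptions.

# Illustrative analytics sketch: discount alert priority by the historical
# false-positive rate of the rule that fired. Rates and weights are assumptions.
HISTORICAL_FP_RATE = {"failed-login": 0.90, "lateral-movement": 0.10}


def adjusted_risk(base_severity: int, rule: str) -> float:
    fp_rate = HISTORICAL_FP_RATE.get(rule, 0.50)
    return base_severity * (1.0 - fp_rate)


for rule in ("failed-login", "lateral-movement"):
    print(rule, "adjusted risk:", adjusted_risk(3, rule))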
Table 5 outlines the main requirements for analytics support in SOAR tools.

Table 5: Analytics Support

Capability and minimum requirements:
  • Incident investigation: Correlate incidents, including artifacts, to cross-match activity, and either view or link related incidents. The information should then be surfaced proactively to analysts. Use forensics to perform a detailed analysis of activity that occurred before and after a security breach.
Source: Gartner (November 2017)

Management of Threat Intelligence
Threat intelligence is becoming a significant resource for detecting, diagnosing and treating imminent or active threats (see “Market Guide for Security Threat Intelligence Products and Services”). Most SOAR tools, like many others in the security market today, include various forms of threat intelligence integration for this purpose. Some are built in, and others can be augmented by tools such as a TIP. SOAR tools, however, allow not just themselves but other deployed technology to make use of third-party sources of intelligence. This can come in various forms: open source; industry leaders; coordinated response organizations, such as Computer Emergency Response Teams (CERTs); and a large number of commercial threat intelligence providers.
TIPs specialize in enabling intelligence-led initiatives in a security program as their base feature set. Today, they offer a sophisticated method for collecting and aggregating threat intelligence for use in security operations. They also have connections to existing tools, such as SIEM, firewall, secure web gateway (SWG), IDPS and endpoint detection and response (EDR).
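The aggregation-and-distribution idea can be sketched in a few lines: merge indicators from several feeds, deduplicate them, keep the highest confidence score seen for each, and pass only high-confidence entries to enforcement points. The feed contents and threshold below are assumptions.

# Minimal sketch of TIP-style aggregation: merge indicators from several feeds,
# deduplicate them, and keep the highest confidence score seen for each one.
# Feed contents and the distribution step are assumptions.
feeds = {
    "open-source": [{"indicator": "203.0.113.7", "confidence": 40}],
    "commercial": [{"indicator": "203.0.113.7", "confidence": 85},
                   {"indicator": "evil.example", "confidence": 70}],
}

merged: dict[str, int] = {}
for entries in feeds.values():
    for entry in entries:
        current = merged.get(entry["indicator"], 0)
        merged[entry["indicator"]] = max(current, entry["confidence"])

# Distribute only high-confidence indicators to enforcement points (firewall, SWG, EDR).
blocklist = [ioc for ioc, confidence in merged.items() if confidence >= 75]
print("blocklist:", blocklist)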

Dashboards and Reporting

SOAR tools are expected to generate reports and dashboards for at least three classes of persona: analyst, SOC director and chief information security officer (CISO).
Because SOAR tools orchestrate incident response, have bidirectional communication with many other tools in the organization, and empower analysts, they are generating and accessing a lot of very valuable metrics that can be used for several types of reporting.
Table 6 outlines the main requirements for dashboards and reporting in SOAR tools.

Table 6: Dashboard and Reporting Capabilities

Capability and minimum requirements:
  • Analyst-level reporting: Report on activity for each analyst on metrics such as:
      • Number and types of incidents touched, closed and open.
      • Average and mean time for each of the phases of the incident response (for example, incident and triage).
  • SOC director-level reporting: Report on the efficiency and behavior of the SOC on metrics such as:
      • Number of analysts; number of incidents per analyst.
      • Average and mean time for each of the phases of the incident response (for example, incident and triage).
  • CISO-level reporting: Report on priorities determined by business context metrics, such as:
      • Risk management: Demonstrate alignment of risks and IT metrics that would have a logical impact on business performance due to lack of controls, impact of incidents and regulations.
      • Efficiency: Demonstrate some level of cost reduction by minimizing incident impact. Key metrics would be MTTD, MTTR and reduction of labor time through automation.
Source: Gartner (November 2017)
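MTTD and MTTR can be derived directly from timestamps the SOAR tool already records for each incident. A minimal computation over hypothetical sample data (times in hours), assuming MTTR is measured from detection to resolution:

# Minimal sketch of computing MTTD and MTTR from incident timestamps (hours).
# The sample records are hypothetical.
incidents = [
    {"occurred": 0, "detected": 6, "resolved": 30},
    {"occurred": 0, "detected": 2, "resolved": 10},
]

mttd = sum(i["detected"] - i["occurred"] for i in incidents) / len(incidents)
mttr = sum(i["resolved"] - i["detected"] for i in incidents) / len(incidents)
print(f"MTTD: {mttd} hours, MTTR: {mttr} hours")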

Benefits and Uses

SOAR supports multiple activities for security operations decision making, such as:
  • Prioritizing security operations activities: Prioritized and managed remediation based on business context is the main target of security operations.
  • Formalizing triage and incident response: Security operations teams must be consistent in their response to incidents and threats. They must also follow best practices, provide an audit trail and be measurable against business objectives.
  • Automating containment workflows: This gives SOC teams the ability to automate most of the activities required to isolate or contain a security incident, while leaving the final decisions and closing steps of the incident response to a human.

Adoption Rate

Gartner estimates that less than 1% of large enterprises currently use SOAR technologies. Higher adoption will be driven by pressing staff shortages, a relentless threat landscape, increasing internal and externally mandated compliance rules (such as mandatory breach disclosure), and steady growth in the APIs offered by security products. The potential market for SOAR today consists of large organizations, with managed security service providers (MSSPs) as the primary target. Over time, smaller teams facing the same security threat problems will also begin to adopt SOAR tools. The ongoing skills and expertise shortage and the increasing escalation in threat activity will hasten the move to orchestration and automation of SOC activities.

Risks

Key risks for implementing SOAR include:
  • Market direction: In the longer term, adjacent technologies that are much larger and also focus on security operations (such as SIEM or other threat-focused vendors/segments) are likely to add SOAR-like capabilities. This will be sped up by acquisitions of SOAR tool vendors (for example, IBM acquiring Resilient Systems, Microsoft acquiring Hexadite, FireEye acquiring Invotas, and ServiceNow acquiring BrightPoint Security).
  • Limited integration value: Clients will not be able to leverage a SOAR tool if they lack a minimum set of security solutions in place to provide enough information for decision making or to automate security tasks against. For example, SIEM is often a key piece of technology for the use of SOAR tools because of its complementary nature. Today, SOAR is most viable for Type A and Type B organizations.2
  • Budget: Clients that are budget-constrained need to juggle conflicting needs of stretched budgets for all of IT, let alone security. They will likely not be early consumers of these technologies and instead will look to invest in more foundational security measures.

Recommendations

IT security leaders should consider SOAR tools in their security operations to meet the following goals.

Improve Security Operations Efficiency and Efficacy

SOAR tools offer a way to move through a task, from steps A to Z. For example, if a process takes an hour or two to perform, having a way to reduce that to 15 minutes offers a significant improvement in productivity. This is beneficial because:
  • Performing the task faster equals better time to resolution. The longer an issue is left unaddressed, the worse it can become, leaving the organization in a potentially risky situation for longer periods of time. Ransomware, for example, is a threat that can get exponentially worse with time.
  • Staff shortages are a critical issue for many organizations. The ability to handle processes more efficiently means that security analysts spend less time on each incident and can therefore respond to more incidents despite having fewer resources available.
  • Automation and orchestration allow your tools to work together to solve issues, versus operating in isolation with no context, which requires a lot of manual work to perform required tasks.

Product Selection

Security and risk management leaders should favor SOAR solutions that:
  • Allow orchestration of a rich set of different security (and nonsecurity) technologies, with a focus on the specific solutions that are already deployed or about to be deployed in an organization.
  • Promote an easy integration of tools not included in the out-of-the-box integration list.
  • Offer the capability to easily code an organization’s existing playbooks that the tool can then automate, either via an intuitive UI and/or via a simple script.
  • Optimize the collaboration of analysts in the SOC; for example, with a chat or IM framework that makes analysts’ communication more efficient, or with the ability to work together on complex cases.
  • Have pricing that is aligned with the needs of the organization and that is predictable. Avoid pricing structures based on the volume of data managed by the tool, or on the number of playbooks run per month, as these metrics carry an automatic penalty for more frequent use of the solution.
  • Offer flexibility in the deployment and hosting of the solution, either in the cloud, on-premises or a hybrid of these, to accommodate organizations’ security policies and privacy considerations, or organizations’ cloud-first initiatives.

Better Prioritize the Focus of Security Operations

Prioritization is perennially a key problem. Favor SOAR tools that can help you select the top 10 things to do today when there are 100 you could potentially do. Efficiency will not fix poor prioritization. SOAR tools can help here by using external context, such as threat intelligence, to drive processes with more context so that better decisions can be made in security operations. The goal is working smarter, not harder.

Don’t “Boil the Ocean” — Focus on Critical Security Processes and Use Tools Such as SOAR to Evolve From There

Security teams are regularly tasked with fixing all things, all the time, 24/7, everywhere, but with the same budget and staffing as last year. This is clearly untenable, yet it is a persistent theme in our client inquiries with security operations teams. For security operations, we recommend focusing on executing well on key incident response processes, such as malware outbreak, data exfiltration and phishing. Address these types of situations very well first, and then use that well-executed base to expand into other areas.

Representative Vendors

Anomali
Ayehu
CyberSponse
Demisto
DFLabs
EclecticIQ
IBM (Resilient Systems)
Microsoft (Hexadite)
Phantom
Resolve Systems
ServiceNow Security Operations
Siemplify
Swimlane
Syncurity
ThreatConnect
ThreatQuotient

Evidence

1 M-Trends showed MTTD reduced from 146 to 99 days between 2016 and 2017. See FireEye, M-Trends Reports, “M-Trends 2017.”