Visual Capitalist

http://www.visualcapitalist.com/worlds-money-markets-one-visualization-2017/?utm_content=bufferb8997&utm_medium=social&utm_source=linkedin.com&utm_campaign=buffer


DDoS Attack Types


  1. Volumetric attacks, which are believed to comprise more than 50 percent of attacks launched, are focused on filling up a victim’s network bandwidth. Among the most common volumetric attacks are User Datagram Protocol (UDP) flood attacks, where an attacker sends a large number of UDP packets to random ports on a remote host. UDP floods accounted for approximately 75 percent of DDoS attacks in the last quarter of 2015, according to the Verisign DDoS Trends Report. A common form of UDP flood attack relies on reflection and amplification. UDP is a connectionless protocol; that is, it doesn’t require that the two ends of a conversation establish a connection before exchanging data. An attacker can therefore forge UDP packets with fake source addresses and use those packets to generate reply traffic. By setting the source of the UDP packets to be the IP address of the intended victim, and then sending those packets to various servers for UDP-based applications, the attacker causes the servers to send their reply traffic to the forged source IP address: the victim. This reply traffic is the “reflection” part of the attack. It’s a lot like calling every pizza place in your county and ordering a lot of pizzas to be delivered to someone you really don’t like. The “amplification” part comes in when you understand that many UDP services generate replies that are much larger than the initial request. For instance, the Domain Name System (DNS) has a bandwidth amplification factor of 28 to 54 (the reply to a DNS request can be between 28 and 54 times larger than the request). The Network Time Protocol (NTP) has a bandwidth amplification factor of 556. By combining reflection (the server sends reply traffic to a spoofed source address) with amplification (the reply traffic is much larger than the initial request), attackers can do a lot of damage to a victim with very little effort on their part (see the back-of-the-envelope sketch after this list). A number of UDP-based applications and services can be used to generate amplification and reflection attacks, including DNS, NTP, the Simple Service Discovery Protocol (SSDP), and the Simple Network Management Protocol (SNMP).
  2. Protocol attacks (sometimes also called state-exhaustion attacks) target a weakness in how a protocol operates. A well-known protocol attack is the SYN flood, which targets the three-way handshake mechanism in TCP. When a server receives a SYN packet, this is a signal to the server that another machine wants to open a TCP connection. The server will allocate some of its resources to this half-open connection, and send a SYN ACK packet back to the initiating machine. Under normal circumstances, the initiator will then send an ACK packet to the server, the three-way handshake is complete, and the machines will then exchange data. In a SYN flood attack, an attacker sends a rapid succession of TCP SYN requests–typically from spoofed source IP addresses–to open a connection to a network server. The server sends SYN ACK packets back to the source addresses, which never reply with an ACK. The server keeps the half-open TCP connections around, using up resources, until the server is no longer able to accept any new connections.
  3. Application attacks target weaknesses in how an application works. One well-known application attack is Slowloris, which targets web servers. In a Slowloris attack, the attacker sends HTTP requests to a web server without ever completing the requests. Periodically (and slowly–hence the name), the attacker will send additional headers, thus keeping the request “alive” but not finished. Similar to a SYN flood, this forces the web server to maintain open connections for these partially completed HTTP requests, eventually preventing it from accepting any new connections.
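
To get a feel for what those amplification factors mean in practice, here is a rough back-of-the-envelope sketch (the attacker bandwidth is an illustrative figure, not taken from any report):

# rough amplification math using the NTP factor quoted above
attacker_mbps=10     # spoofed NTP request traffic the attacker can generate
amp_factor=556       # NTP bandwidth amplification factor
echo "$(( attacker_mbps * amp_factor )) Mbps of reply traffic aimed at the victim"
# prints: 5560 Mbps of reply traffic aimed at the victim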

Cloud Migration Example


This is by far one of the best cloud migration examples I have come across, so I am keeping a copy here for future reference. It uses ZeroTier and Consul; you could use VMware NSX and velocloud.com for the same function.

https://tech.channable.com/posts/2017-10-25-how-we-moved-to-google-cloud-using-consul-and-zerotier.html

Prelude

About 6 months ago (in a galaxy pretty close to our office) …

Our old hosting provider was having network issues… again. There had been a network split around 3:20 AM, which had caused a few of our worker servers to become disconnected from the rest of our network. The background jobs on those workers kept trying to reach our other services until their timeout was reached, and they gave up.

This had already been the second incident in that month. Earlier, a few of our servers had been rebooted without warning. We were lucky that these servers were part of a cluster that could handle suddenly failing workers gracefully. We had taken care that rebooted servers would start up all their services in the right order and would rejoin the cluster without manual intervention.

However, if we had been unlucky and, for example, our main database server had been restarted without warning, then we would have had some downtime and, potentially, would have had to fail over manually to our secondary database server.

We kept joking about how the flakiness of our current hosting provider was a great “Chaos Monkey”-like service which forced us to make sure that we had proper retry-policies and service start-up sequences in place everywhere.

But there were also other issues: booting up new machines was a slow and manual process, with few possibilities for automation. The small maximum machine size also started to become an inconvenience, and, lastly, they only had datacenters in the Netherlands, while we kept growing internationally.

It was clear that we needed to do something about the situation.

Which cloud to go to?

Our requirements for a new hosting provider made it pretty clear that we would have to move to one of the three big cloud providers if we wanted to fulfill all of them. One of the important things for us was an improved DevOps experience that would allow us to move faster. We needed to be able to spin up new boxes with the right image in seconds. We needed a fully private network that we could configure dynamically. We needed to be flexible in both storage and compute options and be able to scale both of them up and down as necessary. Additional hosted services (e.g. log aggregation and alerting) would also be nice to have. But, most importantly, we needed to be able to control and automate all of this with a nice API.

We had already been using Google Cloud Storage (GCS) in the past and were very content with it. The main reason for us to go with GCS had been the possibility to configure it to be strongly consistent, which made things easier for us. Therefore, we had a slight bias towards Google Cloud Platform (GCP) from the start but still decided to evaluate AWS and Azure for our use case.

Azure fell out pretty quickly. It just seemed too rough around the edges, and some of us had used it for other projects and had burned their fingers on one thing or another. With AWS, the case was different, since it has everything and the kitchen sink. A technical problem was the lack of true strong consistency for S3: while it does provide read-after-write consistency for new files, it only provides eventual consistency for overwrite PUTs and for DELETEs.

Another issue was the price-performance ratio: for our workload, it looked like AWS was going to be at least twice as expensive as GCP for the same performance. While there are a lot of tricks one can use to get a lower AWS bill, they are all rather complex and either require you to get into the business of speculating on spot instances or to commit to specific instances for a long time, both of which we would rather avoid. With GCP, the pricing is very straightforward: you pay a certain base price per instance per month, and you get a discount on that price of up to 80% for sustained use. In practice, if you run an instance 24/7, you end up paying less than half of the “regular” price.

Given that Google also offers great networking options, has a well-designed API with an accompanying command-line client, and has datacenters all over the world, the choice was simple: we would be moving to GCP.

How do we get there?

After the decision had been taken, the next task was to figure out how we would move all of our data and all of our services to GCP. This would be a major undertaking and require careful preparation, testing, and execution. It was clear that the only viable path would be a gradual migration of one service after another. The “big bang” migration is something we had stopped doing a long time ago, after realizing that, even with only a handful of services and a lot of preparation and testing, it is very hard to get right. Additionally, there is often no easy rollback path once you have pulled the trigger, leading to frantic fire-fighting and stressed engineers.

The requirements for the migration were thus as follows:

  • as little downtime as possible
  • possibility to gradually move one service after the other
  • testing of individual services as well as integration tests of several services
  • clear rollback path for each service
  • continuous data migration

This daunting list had a few implications:

  • We would need to be able to securely communicate between the old and the new datacenter (let’s call them dc1 and dc2)
  • The latency and the throughput between the two would need to be good enough that we could serve frontend requests across datacenters
  • Internal DNS names needed to be resolved between datacenters (and there could be no DNS name clashes)
  • And, we would have to come up with a way to continuously sync data between the two until we were ready to pull the switch

A plan emerges

After mulling this over for a bit, we started to have a good idea how to go about it. One of the key ingredients would be a VPN that would span both datacenters. The other would be proper separation of services on the DNS level.

On the VPN side, we wanted to have one big logical network where every service could talk to every other service as if they were in the same datacenter. Additionally, it would be nice if we wouldn’t have to route all traffic through the VPN. If two servers were in the same datacenter, it would be better if they could talk to each other directly through the local network.

Given that we don’t usually spend all day configuring networks, we had to do some research first to find the best solution. We talked to another startup that was using a similar setup, and they were relying on heavy-duty networking hardware that had built-in VPN capabilities. While this was working really well for them, it was not really an option for us. We had always been renting all of our hardware and had no intention of changing that. We would have to go with a software solution.

The first thing we looked at was OpenVPN. It’s the most popular open-source VPN solution, and it has been around for a long time. We had even been using it for our office network for a while and had some experience with it. However, our experience had not been particularly great. It had been a pain to configure, and getting new machines online was more of a hassle than it should have been. There were also occasional connectivity issues that we could only fix by restarting the service.

We started looking for alternatives and quickly stumbled upon zerotier.com, a small startup that had set out to make using VPNs user-friendly and simple. We took their software for a test ride and came away impressed: it literally took 10 minutes to connect two machines, and it did not require us to juggle certificates ourselves. In fact, the software is open-source and they do provide signed DEB and RPM packages on their site.

The best part of ZeroTier, however, is its peer-to-peer architecture: nodes in the network talk directly to each other instead of through some central server, and we measured very low latency and high throughput as a result. This had been another concern with OpenVPN, since the gateway server could have become a bottleneck between the two datacenters. The only caveat with ZT is that it requires a central server for the initial connection to a new node; all traffic after that initial handshake is peer-to-peer.

With the VPN in place, we needed to take care of the DNS and service discovery piece next. Fortunately, this one was easy: we had been using Hashicorp’s Consul pretty much from the beginning and knew that it had multi-datacenter capabilities. We only needed to find out how to combine the two.

The dream team: Consul and ZeroTier

Getting ZeroTier up and running was really easy:

  • First install the zerotier-one service via apt on each server (automate this with your tool of choice).
  • Then, issue sudo zerotier-cli join the_network_id once to join the VPN.
  • Finally, you have to authorize each server in the ZT web interface by checking a box (this step can also be automated via their API, but this was not worth the effort for us).
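
For reference, the per-node bootstrap boils down to something like this (the network ID is a placeholder, and this assumes ZeroTier’s apt repository has already been added to the machine):

sudo apt-get install -y zerotier-one       # install and start the ZeroTier service
sudo zerotier-cli join 1234567890abcdef    # join the VPN using your 16-character network ID
sudo zerotier-cli listnetworks             # should report the network as OK once authorized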

This will create a new virtual network interface on each server:

robert@example ~ % ip addr
3: zt0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 2800 qdisc pfifo_fast state UNKNOWN group default qlen 1000
    link/ether ff:11:ff:11:ff:11 brd ff:ff:ff:ff:ff:ff
    inet 10.144.111.1/16 brd 10.144.255.255 scope global zt0

The IP address will be assigned automatically a few seconds after authorizing the server. Each server then has two network interfaces, the default one (e.g. ens4) and the ZT one, called zt0. They will be in different subnets, e.g. 10.132.x.x and 10.144.x.x, where the first one is the private network inside of the Google datacenter and the second is the virtual private network created by ZT, which spans across both dc1 and dc2. At this point, each server in dc1 is able to ping each server in dc2 on their ZT interface.

It would be possible to run all traffic over the ZT network, but, for two servers that are anyway in the same datacenter, this would be a bit wasteful due to the (small) overhead introduced by ZT. We, therefore, looked for a way to advertise a different IP address depending on who was asking. For cross-datacenter DNS requests, we wanted to resolve to the ZT IP address, and, for in-datacenter DNS requests, we wanted to resolve to the local network interface.

The good news here is that Consul supports this out-of-the-box! Consul works with JSON configuration files for each node and service. An example of the config for a node is the following:

robert@example:/etc/consul$ cat 00-config.json
{
  "dns_config": {
    "allow_stale": true,
    "max_stale": "10m",
    "service_ttl": {
      "*": "5s"
    }
  },
  "server": false,
  "bind_addr": "0.0.0.0",
  "datacenter": "dc2",
  "advertise_addr": "10.132.0.1",
  "advertise_addr_wan": "10.144.111.1",
  "translate_wan_addrs": true
}

Consul relies on the datacenter field being set correctly when it serves both LAN and WAN requests. The other important settings here are:

  • advertise_addr: the address to advertise over the LAN (the local address in our case)
  • advertise_addr_wan: the address to advertise over the WAN (the ZT address in our case)
  • translate_wan_addrs: enable this to return the WAN address for nodes in a remote datacenter
  • bind_addr: make sure this is 0.0.0.0 (the default) so that Consul listens on all interfaces

After applying this setup to all nodes in each datacenter, you should now be able to reach each node and service across datacenters. You can test this by running, for example, dig node_name.node.dc1.consul once from a machine in dc1 and once from a machine in dc2: the first query should return the node’s local address and the second its ZT address.
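
Using the example node configuration shown above (the hostnames and prompts are illustrative, and this assumes Dnsmasq is already forwarding *.consul queries to the local Consul agent), the same query returns a different address depending on where it is run:

robert@example ~ % dig +short example.node.dc2.consul    # from a machine in dc2
10.132.0.1

robert@other ~ % dig +short example.node.dc2.consul      # from a machine in dc1
10.144.111.1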

Given this setup, it is then possible to switch from a service in one datacenter to the same service in another datacenter simply by changing its DNS configuration.

Issues we ran into

As with all big projects like this, we ran into a few issues of course:

  • We encountered a Linux kernel bug that prevented ZT from working. It was easily fixed by upgrading to the latest kernel.
  • We are using Hashicorp’s Vault for secret management (see our other blog post for a more in-depth explanation of how we use it). In order to make Vault work nicely with ZT, we needed to set its redirect_addr to the Consul hostname of the server it is running on, e.g. redirect_addr = "http://the_hostname.node.dc1.consul:8501". Vault advertises its redirect address in its Consul service definition, and by default this is the private IP of the datacenter it is running in. Setting the redirect_addr to the Consul hostname ensures that Consul resolves it to the right address. Debugging this issue was quite the journey and required diving into the source of both Consul and Vault.
  • Another issue we ran into was that Dnsmasq is not installed by default on GCE Ubuntu images. We rely on Dnsmasq to relay *.consul domain names to Consul. It can easily be installed via apt, of course; a typical forwarding rule is sketched just below.
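
The Dnsmasq forwarding rule itself is a one-liner (this assumes Consul’s DNS interface is listening on its default port 8600 on localhost; the file name is arbitrary):

echo 'server=/consul/127.0.0.1#8600' | sudo tee /etc/dnsmasq.d/10-consul
sudo systemctl restart dnsmasq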

Moving the data

While a lot of our services are stateless and could therefore easily be moved, we naturally also need to store our data somewhere and, therefore, had to come up with a plan to migrate it to its new home.

Our main datastores are Postgres, HDFS, and Redis. Each one of these needed a different approach in order to minimize any potential downtime. The migration path for Postgres was straightforward: using pg_basebackup, we could simply add another hot-standby server in the new datacenter, which would continuously sync the data from the master until we were ready to pull the switch. Before the critical moment, we turned on synchronous_commit to make sure that there was no replication lag and then failed over using the trigger-file mechanism that Postgres provides. This technique is also convenient if you need to upgrade your DB server, or if you need to do some maintenance, e.g. apply security updates and reboot.
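
A minimal sketch of that standby setup and failover, assuming a replication role named replicator, a Consul hostname for the master, and a generic data directory (these names and paths are assumptions, not details from the original setup):

# on the new standby in dc2: seed it from the dc1 master and keep streaming WAL
pg_basebackup -h db-master.node.dc1.consul -U replicator -D /var/lib/postgresql/main -X stream -R

# recovery.conf (written by -R above) can additionally point at a trigger file:
#   trigger_file = '/tmp/postgresql.trigger'

# at cut-over time, once replication lag is zero, promote the standby:
touch /tmp/postgresql.trigger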

For HDFS, the approach was different: due to the nature of our application, we refresh all data on it at least every 24 hours. This made it possible to simply upload all of the data to two clusters in parallel and keep them in sync that way. Having the data on both the new and the old cluster allowed us to run a number of integration tests that ensured the old and the new system would return the same results. For a while, we would submit the same jobs to both clusters and compare the results. The result from the new cluster would be discarded, but, if there was a difference, we would send an alert so that we could investigate the difference and fix the problem. This kind of “A/B testing” was invaluable for ironing out unforeseen issues before switching over in production.

We use Redis mainly for background jobs, and we have support for pausing jobs temporarily in Jobmachine, our job scheduling system. This made the Redis move easy: We could pause jobs, sync the Redis data to disk, scp the data over to the new server, run a few integrity tests, update DNS, and then resume processing jobs.
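
In shell terms, the move was roughly the following (host names and paths are assumptions; pausing and resuming the jobs happens in Jobmachine, outside of this snippet):

redis-cli SAVE     # blocking save to dump.rdb; fine here because all jobs are paused
scp /var/lib/redis/dump.rdb redis.node.dc2.consul:/var/lib/redis/
# start Redis on the new server, run the integrity tests, update DNS, then resume the jobs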

The key in migrating our data was again to handle each service individually: validate the data, test the services relying on it, and then switch over once we were sure everything was working correctly.

Conclusion

The issues and limitations of our old hosting provider made it necessary to look for an alternative. It was important for us that we could move all of our services and data gradually and could test and validate each step of the migration. We therefore chose to create a VPN that would span both of our datacenters using ZeroTier. In combination with Consul, this allowed us to have two instances of each service, which we could easily switch between using only a DNS update. For the data migration we made sure to duplicate all data continuously until we were sure everything was working as intended. If you are looking for an easy way to migrate from one datacenter to another, then we can highly recommend looking into both Consul and ZeroTier.

OSCP Intro Letter


Dear Applicant,
Thank you for your interest in Penetration Testing with Kali Linux. Please read this entire email carefully as it contains very important information.

Course Prerequisites

To be successful, you must have basic Linux skills, meaning you need to be able to navigate the Linux filesystem, run simple commands, edit files, and be comfortable at the command line in general. We also recommend familiarity with Bash scripting, with basic Perl, Python, or Ruby skills considered a plus. A solid understanding of TCP/IP, including addressing, routing, and subnetting basics, is required as well.

Course Information

The Penetration Testing with Kali Linux (PWK) online course consists of downloadable videos totaling over eight hours in length and an approximately 350-page PDF lab guide. If you haven’t already done so, you can view the course syllabus and objectives at the following link:

Penetration Testing with Kali Linux Syllabus [1]

In addition to these study materials, you will receive access to our online labs where you can practice various attack techniques safely and legally. You can purchase VPN lab access for 30, 60, or 90 days in duration. The lab time begins when you receive your course materials as the content is intended to be practiced as you progress through the course. Based on previous student experiences, we recommend you begin with the 60 day option. Please note that once purchased, lab time is non-refundable.

The cost for this course with 30 days of labs is: $800 USD
The cost for this course with 60 days of labs is: $1000 USD
The cost for this course with 90 days of labs is: $1150 USD

The certification exam is included in the fees above.


We only accept payment via major credit cards, debit cards, and e-wallets.

The time required to complete the course exercises depends on your technical background; however, the average reported time by our students is a minimum of 100 hours. Note that this estimate only reflects the time to complete the course exercises and does not include the time needed to attack the various lab systems, which can vary from weeks to months depending on background, aptitude, and available time. Generally, we find that 60 days of lab time is suitable for the average student.

You will be able to watch the videos and read the lab guide offline; however, the VPN labs require a solid Internet connection, such as a high-speed ADSL or cable connection (512/256 minimum). Our labs contain various configurations of Windows and Linux machines with specialized software packages and pentesting applications.

Online Lab Access

Our online VPN lab environment is a critical component of the course and you are provided access along with your course materials on your start date. You may not receive your course materials prior to your lab access as you are expected to work through the course exercises in the lab environment.

Lab access is provided as a consecutive block of time and is non-refundable. You may only pause your lab account under exceptional circumstances and only with valid, written justification. When lab time is paused, resources remain allocated in our lab; they sit idle and prevent other students from taking your position in the labs.

Support Terms

Please note that Penetration Testing with Kali Linux is a self-paced and self-directed course that does not have any official support. In order to be successful, a great deal of independent study and research beyond the presented materials is required. You are expected to conduct extra research in order to compromise various hosts or complete the course exercises.

Our student administrators are available primarily to assist with technical issues related to the online labs, but they can also provide occasional hints or guidance once a student has demonstrated that they have already put in substantial effort before asking for assistance. We also have active student forums where help might be found if needed. To get a better understanding of the effort and level of work required in the course, we recommend you refer to our Course Reviews [2] page, where you will find numerous unsolicited reviews written by our alumni.

The typical administrative hours for the orders department are 1400 – 2200 GMT and our student administrators are typically online from 0800 – 0300 GMT.

Certification Information

The Offensive Security Certified Professional (OSCP) [3] certification challenge is an online exam. You will connect to our exam VPN lab remotely and have 23 hours and 45 minutes to complete the challenge and an additional 24 hours to submit your documentation by email. In addition to meeting the certification exam objectives, you must submit an exam penetration test report in order to be awarded your OSCP designation.

You must schedule the challenge within 90 days of the expiration of your lab time.

Penetration Testing with Kali Linux may qualify you for 40 ISC2 CPE Credits. This applies to students who submit their exercises and documentation at the end of the course or pass the certification challenge. CISSPs can register the Offensive Security training at the ISC2 website. Please note that we cannot register the CPEs on your behalf.

Continue Registration

We open a course every Sunday and recommend that students begin the registration process 15 – 30 days before their desired start date. If you would like to sign up for the Penetration Testing with Kali Linux course, please follow the link below in order to continue your registration. It is very important that you register with your legal name. You will have the opportunity to change this after passing your certification if you would like your certificate to read differently.

Our students usually provide us with a corporate email address or an address that can somehow help provide proof of identity. Email addresses from Internet Service Providers (ISPs) or free email providers such as Hotmail or Gmail (including paid versions) are not accepted without a scanned ID.
If you are unable to provide an alternate non-free address that allows us to get basic verification, we will require a scanned copy of your valid government issued ID in color, such as a driver’s license or passport. For IDs in the form of a card, please include a scan of both the front and back of the card.

We need to be able to see your photo, full name, address (if applicable), year of birth and the expiration date of the ID. You may blur the ID number. Expired IDs are not accepted.

You may also send a photo of your ID if a scanner is not available as long as the image is clear and not blurry.

If you choose to send a scanned ID, you may blur the ID number and send it to registrar@offensive-security.com (please do so only after you use the link below and provide the required information).
Note that a registration request with a free email address will be ignored.

Register for Penetration Testing with Kali Linux [4]

The above registration link will be valid for the next 72 hours. You will be required to submit a new request in the future via our website if you do not use this link in the allotted time.

Do not hesitate to contact us with any questions.

Sincerely,
The Offensive Security Team
www.offensive-security.com

[1]: https://www.offensive-security.com/documentation/penetration-testing-with-kali.pdf
[2]: https://www.offensive-security.com/testimonials-and-reviews/
[3]: https://www.offensive-security.com/information-security-certifications/oscp-offensive-security-certified-professional/
[4]: https://www.offensive-security.com/signup.php?md=2cf4fd5e2380e823a225d78c56cf5cc3&ver=1dssv65332

International Unicorn Club: 106 Private Companies Outside The US Valued At $1B+


In 2013, over 70% of companies that achieved unicorn status were US-based. That share has declined every year from 2013 through 2016, and last year less than half of the unicorns added to the club (42%) were based in the US.

This year to date, the most highly valued non-US-based companies to reach unicorn status include the China-based companies Toutiao (most recently valued at $11B), Mobike ($3B), NIO ($2.9B), and e-Shang Redwood ($2B), the Germany-based Otto Bock HealthCare ($3.5B), and the first-ever Maltese unicorn, VistaJet ($2.5B).

[Images: international unicorns map; CB Insights Unicorn Trends webinar]

AWS Artefacts


 

AWS Artifact provides on-demand access to AWS’ security and compliance reports and select online agreements. Reports available in AWS Artifact include our Service Organization Control (SOC) reports, Payment Card Industry (PCI) reports, and certifications from accreditation bodies across geographies and compliance verticals that validate the implementation and operating effectiveness of AWS security controls. Agreements available in AWS Artifact include the Business Associate Addendum (BAA) and the Nondisclosure Agreement (NDA).

 

Aristotle triptych


Tell them what you are going to say, say it and then tell them what you said.

 

  1. Tell them what you will tell them. This is your opener in which you lay out why you are speaking to the audience. Your message should be predicated on two things: what you want to say, and what the audience needs to hear. Too many presenters focus on the first half but not the second. Knowing what your audience needs to hear is critical to the leadership aspect of your message. You are there to provide direction.
  2. Tell them. This section is open ended. It is the time when you pour out all your content, and explain the details. As a leader, it is the best time for you to build your business case. Your message should ring with logic; that is, you need to emphasize the benefits of your points. But important messages also need to resonate with the heart. Put people in the position to feel why what you’re saying is important and how things will be better when they follow through with your ideas.
  3. Tell them what you just told them. Reiterate your salient points. For leaders, this is the opportunity to give people a reason to believe in your idea and in you. And then demonstrate how you and your team are the ones to deliver on the message. That is, if you are a sales person, how you will back up the product. Or if you are a CEO, how you will guide the company through troubled waters.

This is a formula, but it need not be formulaic. That is, you can imbue the structure with data but, more importantly, with your personality. Load it with stories that amplify your points. Season it with numbers, add spice, sprinkle in humor. And relate the message to your audience.

Not only does Aristotle’s triptych work for formal presentation,

DEVOPS / Agile Technical Titles and Skills


 

  • UI Designer
  • Full Stack Developer
  • DevOps Engineers (Puppet, Chef, Ansible, Salt, Docker, Kubernetes, AWS, Azure, Golang)
  • SecOps
  • Cloud Solution Engineers (AWS, Azure, Rackspace, Google Cloud Platform)
  • SysOps and TechOps Engineers (Jenkins, Hudson, GIT, Bamboo, Stash)
  • Linux Systems Engineers (AWS, KVM, VMWare, Nagios, Python, Shell, Ruby)
  • Linux Systems Administrators (Linux, Unix, Oracle, Solaris, Ubuntu, Debian, CentOS, RedHat)

CISO Strategy


 

What the CISO Should Do to Help the Board Make Informed Decisions Around Security and Risk

  1. Develop and communicate a security mission statement rooted in business enablement
  2. Determine your risk appetite and document your risk tolerance in layman’s terms
  3. Choose a security framework and map initiatives to that framework
  4. Establish unbreakable rules around security responsibility and information sharing
  5. Keep the board updated on security trends and be prepared to discuss specifics, such as how the organization is responding to a specific threat drawing headlines

What the Board Should Do to Support a Culture of Security Awareness and Accountability

  1. Approach and understand cybersecurity as an enterprise-wide risk issue
  2. Learn the legal implications of cyber risks
  3. Access cybersecurity expertise by giving cyber risk discussions adequate time on the board meeting agenda
  4. Set the expectation that management will establish an enterprise-wide risk management framework with adequate staffing and budget
  5. Discuss cyber risks from the perspective of identifying which risks to avoid, mitigate, accept, or transfer through insurance, as well as specific plans associated with each