Amazon AWS issues for Enterprise Workloads (Comparison)
Amazon EC2 can be an appropriate solution for certain types of workloads. It was originally built to serve the Amazon Retail web site, not designed from the ground up for enterprise workloads. It is therefore important to highlight some of the weaknesses of EC2 that may make it unsuitable for enterprise workloads. These are discussed in further detail below.
- Does not support stretching VLANs – from the AWS VPC FAQ: "Q. Can a subnet span Availability Zones? No. A subnet must reside within a single Availability Zone."
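Because a subnet cannot span Availability Zones, covering multiple AZs means carving the VPC's address range into one subnet per AZ and duplicating resources across them. A minimal sketch of that planning step using Python's stdlib `ipaddress` module (the AZ names and CIDR ranges are illustrative, not a real deployment):

```python
import ipaddress

def plan_subnets(vpc_cidr, azs, prefix=24):
    """Carve one subnet per Availability Zone out of a VPC CIDR block.

    A subnet cannot span AZs, so each AZ needs its own CIDR slice.
    """
    pool = ipaddress.ip_network(vpc_cidr).subnets(new_prefix=prefix)
    return {az: str(next(pool)) for az in azs}

plan = plan_subnets("10.0.0.0/16", ["ap-southeast-2a", "ap-southeast-2b"])
# → {'ap-southeast-2a': '10.0.0.0/24', 'ap-southeast-2b': '10.0.1.0/24'}
```

Every subnet in the plan then needs its own route tables and its own copies of the workload, which is exactly the duplication a stretched VLAN would avoid.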
- AWS VPC does not support Multicast or Broadcast Traffic – https://aws.amazon.com/vpc/faqs/
- EC2 runs on commodity cloud infrastructure, which can be unsuitable for enterprise workloads.
- Not a managed service – Built on cheap, unreliable infrastructure, with no real uptime guarantee: redundancy is the customer's responsibility. You must architect your application to work across multiple zones and data centers. This engineering work is a hidden cost when comparing do-it-yourself clouds with a managed cloud, in which the vendor builds redundancy into its infrastructure (through features such as RAID storage and redundant power supplies) and takes responsibility for uptime.
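Because redundancy is left to the customer, the application itself has to fail over between zones. A minimal sketch of the kind of client-side failover logic that has to be written and maintained (the zone names are illustrative, and the endpoint callables stand in for real zone-specific service calls):

```python
def call_with_failover(endpoints):
    """Try each zone's endpoint in turn, returning the first success.

    `endpoints` maps a zone name to a zero-argument callable that raises
    on failure. This is the sort of logic EC2 leaves to the customer.
    """
    errors = {}
    for zone, call in endpoints.items():
        try:
            return zone, call()
        except Exception as exc:  # a real client would catch narrower errors
            errors[zone] = exc
    raise RuntimeError(f"all zones failed: {errors}")

# Usage: the first zone is down, the second answers.
def flaky():
    raise ConnectionError("zone down")

zone, result = call_with_failover({"us-east-1a": flaky,
                                   "us-east-1b": lambda: "ok"})
# → zone == "us-east-1b", result == "ok"
```

Multiply this by every service dependency in the application and the hidden engineering cost becomes clear.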
- Noisy neighbour – Amazon Retail is the primary customer of EC2, and its peak periods coincide with those of other customers, so Amazon Retail itself might be your noisy neighbour. Many high-volume workloads, including modern databases, achieve higher performance and cost efficiency on single-tenant servers. Businesses that run MongoDB or Hadoop in multi-tenant clouds must compete for resources with I/O-hungry neighbours. To boost performance, they have to overprovision, so they get lower, less consistent performance and higher costs than they would on a custom-fit hybrid cloud that runs each workload where it runs best.
- No guarantees on SLAs – There are no guarantees on storage or network performance. Because EC2 is offered as commodity infrastructure shared among many subscribers, customers may be subject to "noisy neighbours". Customers can pay more for dedicated instances to avoid these neighbours, yet Amazon still cannot guarantee overall performance.
- Unpredictable costs with no caps – EC2 is not always the most cost-effective option. Because of its method of charging per instance-hour (not to be confused with an hour of CPU usage), additional charges for data transfer on both storage and network traffic, and other unforeseen extraneous charges, there are many use cases where EC2 can cost much more than even dedicated hosted servers.
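As an illustration of how per-instance-hour billing and data-transfer charges stack up, here is a rough monthly estimate under assumed prices (the rates below are hypothetical placeholders, not current AWS pricing):

```python
def estimate_monthly_cost(instances, hourly_rate, transfer_gb, per_gb_rate,
                          hours=730):
    """Rough EC2-style monthly bill: every instance-hour is billed in full,
    and outbound data transfer is charged per GB on top of compute."""
    compute = instances * hours * hourly_rate          # instance-hours
    transfer = transfer_gb * per_gb_rate               # data-transfer charges
    return round(compute + transfer, 2)

# 10 instances at a hypothetical $0.10/hour plus 2 TB out at $0.09/GB:
estimate_monthly_cost(10, 0.10, 2000, 0.09)  # → 910.0
```

Note that the transfer component scales with traffic, not with capacity, which is what makes the monthly total hard to predict in advance.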
- No price cap – Customers cannot cap their monthly bills. As with overage charges on a cell phone, the monthly bill can run up quickly, producing 400-page bills in some cases.
- Lack of security and transparency – Enterprises are subject to a variety of rules and regulations regarding information handling. EC2 does not provide a high level of security transparency, which can cause companies to fall out of compliance if they cannot audit their applications. Customers have no access to logs and can only control firewall ingress (not egress).
- Moving existing apps into EC2 is not simple and requires a complete redesign into a vendor-locked architecture. Amazon uses non-standard storage, database, and network solutions, which means existing applications must be tuned to work in the EC2 environment. Once an application is in EC2, there is no built-in tool to manage both private and public cloud workloads in a single place.
- Moving workloads out of EC2 is not supported – Once an application has been designed and fitted to EC2, it is difficult for customers to move it out. In addition, there are significant restrictions on guest OSs and workloads because EC2 images are not designed to run on other hypervisors.
- Business logic built inside AWS services is locked in and not transferable.
- You are responsible for the service to your customers; if the cloud vendor has an outage, it can have a detrimental effect on your brand.
- SSH key management and compliance – AWS forces a new authentication method that might prove insecure. It does not allow an enterprise to leverage its existing identity-management policies and standards; a new process must be created to manage AWS SSH keys.
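One workaround is to periodically reconcile the keys registered in AWS against an internal inventory by fingerprint. A minimal sketch using only the stdlib (the colon-separated MD5 fingerprint format and the inventory structure are assumptions about how such an audit might be organised, and the key material in the test is a made-up placeholder, not a real key):

```python
import base64
import hashlib

def key_fingerprint(pub_key_b64):
    """Colon-separated MD5 fingerprint of a base64-encoded public key body."""
    raw = base64.b64decode(pub_key_b64)
    digest = hashlib.md5(raw).hexdigest()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

def audit(inventory, aws_keys):
    """Return the names of AWS keys whose fingerprints are not in the
    internal inventory (i.e. keys created outside the approved process)."""
    known = set(inventory.values())
    return [name for name, fp in aws_keys.items() if fp not in known]
```

Any key that appears in AWS but not in the inventory is a compliance exception to be investigated.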
- Lock-in – To determine your total cost, you need to add several other large expenses: hiring experts, often round the clock, to manage and secure your cloud; pulling your developers and managers away from opportunities in your core business; and overprovisioning for high-volume workloads on shared infrastructure. The burden of managing the cloud falls on you, so you must hire experts in cloud infrastructure, network security, and DevOps, as well as in specialties such as Chef, Puppet, Salt, Ansible, MongoDB, Hadoop, MySQL, Magento, Drupal, WordPress, and SharePoint. Some of these experts will need to be available 24/7.
- Rack affinity – Some applications require the lowest possible latency. In AWS, your web front-end EC2 instances might not be in the same physical rack as your database EC2 instances. If your workload requires the lowest possible latency, this is a concern, because you cannot control the physical location of your VMs; they could be distributed across thousands of racks. No one knows, because the AWS data centre architecture is hidden from customers.
- IOPS and NIC speeds – AWS cannot guarantee consistent NIC speeds or IOPS for its standard offerings, which are based on shared architecture. See this article on testing EC2 network speed: http://epamcloud.blogspot.com.au/2013/03/testing-amazon-ec2-network-speed.html
- AWS-specific skills – If you are moving to AWS, you will need to hire highly skilled and currently rare people who have both traditional IT and virtualisation skills and AWS-specific skills. This will increase your headcount and wages budget.
- AWS tipping point – Recent cases from the US show that there is a tipping point at which the number of VMs on AWS becomes unviable compared with an internal private cloud solution for large customers. A few large AWS customers have started to build out their own private clouds.
- AWS uptime SLA – AWS doesn't really offer a meaningful SLA for its services: the uptime commitment covers its services, not the impact on your applications. Per the SLA statement (http://aws.amazon.com/ec2/sla/), they commit to 99.95% availability. Read the SLA carefully (http://aws.amazon.com/ec2-sla/): they only count "Region Unavailable" as downtime, and only if the region is down for 5 consecutive minutes. "'Annual Uptime Percentage' is calculated by subtracting from 100% the percentage of 5 minute periods during the Service Year in which Amazon EC2 was in the state of 'Region Unavailable.'" By this definition, any outage shorter than 5 minutes does not count at all. Moreover, if they do break the SLA, the remedy is only a 10% credit on the bill for the month in which you had the largest downtime. So if they were down for all of January and your bill was $100, they would apply a $10 credit to your account.
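The arithmetic of that definition can be sketched directly; the 5-minute threshold and 10% credit come from the SLA text quoted above, while the rounding of partial periods is my reading of it (outage durations are in minutes):

```python
def annual_uptime_pct(outages_minutes, year_minutes=365 * 24 * 60):
    """Annual uptime per the SLA's definition: only whole 5-minute
    unavailable periods count, so any outage under 5 minutes (and the
    trailing remainder of a longer one) is treated as zero downtime."""
    counted = sum((m // 5) * 5 for m in outages_minutes)
    return 100.0 - 100.0 * counted / year_minutes

def sla_credit(monthly_bill, uptime_pct, target=99.95):
    """10% credit on the worst month's bill if the annual target is missed."""
    return 0.10 * monthly_bill if uptime_pct < target else 0.0

# A 4-minute outage counts as zero downtime; a full month down on a $100
# bill still yields only a $10 credit.
annual_uptime_pct([4])                                  # → 100.0
sla_credit(100.0, annual_uptime_pct([31 * 24 * 60]))    # → 10.0
```

The asymmetry is the point: short outages disappear from the metric entirely, while even a catastrophic outage is capped at a 10% credit.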
- Data security – You need to consider and implement solutions for data at rest and in flight: encryption, SSL, and IPsec.
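For the in-flight side, a minimal sketch of enforcing TLS with certificate verification using Python's stdlib `ssl` module (encryption at rest and IPsec tunnelling need separate tooling; the minimum-version choice below is an assumption about policy, not a mandate from any standard):

```python
import ssl

def strict_tls_context():
    """Client-side TLS context that refuses unverified or legacy connections."""
    ctx = ssl.create_default_context()            # verifies certs and hostnames
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject older protocol versions
    return ctx

ctx = strict_tls_context()
# ctx.verify_mode == ssl.CERT_REQUIRED and ctx.check_hostname is True
```

The same context would then be passed to whatever HTTP or socket client the application uses, so every connection to the cloud provider is both encrypted and authenticated.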