TechCrunch has an interesting article about the top 10 biggest mistakes made with Amazon Web Services. While I don’t want to simply copy what they say, I can see how easy these mistakes are to make. For those who think in terms of an internal data center that has to scale to meet all future needs and spikes, the tendency to over-build is huge; we have to do that in our own data centers. AWS and other cloud services change that model, and we need to change with it. That probably also means having some deep discussions about just how important complete high availability really is. If Amazon takes it to 99.8% because the infrastructure will stay up and now you only have to worry about software, maybe that’s good enough. Anyway, here’s an excerpt:
The 10 Biggest Mistakes Made With Amazon Web Services
Editor’s note: Zev Laderman is the co-founder and CEO of Newvem, a service that helps optimize AWS cloud infrastructure.
Amazon Web Services (AWS) provides an excellent cloud infrastructure solution for both early-stage startups and enterprises. The good news is that AWS is a pay-per-use service, provides universal access to state-of-the-art computing resources, and scales with the growing needs of a business. The bad news: AWS can be very hard for early-stage companies to get started with, while enterprises usually spend too much time on ‘busy work’ trying to optimize AWS and keep costs under control.
We launched a private beta of ‘KnowYourCloud Analytics,’ a tool that helps AWS users get to the bottom of their AWS cloud. By gathering data streams from multiple compute resources and crunching them with its state-of-the-art analytics engine, Newvem enables AWS users to discover potential cost savings, identify security vulnerabilities, and gain more control over availability.
Since our private beta’s launch, we’ve watched over 100,000 AWS instances and have seen users make repeated mistakes in their cloud operations. Some are simple, but they can result in massive security, availability, and cost issues within an organization.
Here are the ten most common mistakes you should avoid in order to make the most out of your AWS cloud footprint.
- Picking oversized instances. AWS offers a wide variety of instance types and sizes. Although this flexibility is valuable, we found that many users pick instances far more powerful than they actually need, which leads to unnecessary costs.
- Provisioning too many instances. In addition to size, AWS allows flexibility in the number of instances a user runs. As a result, users may run too many instances in clusters or behind load balancers. AWS features an on-demand business model, meaning you don’t need to kick off all of the cluster nodes needed for peak loads. Users can add nodes as needed, and can also automate provisioning with AWS’s auto-scaling functionality.
- Failing to make the right trade-offs when selecting instance types. AWS has a wide variety of instance types that differ based on use, such as general-purpose servers, CPU- or memory-intensive workloads, I/O performance, and size. Without proper application benchmarking, it’s very challenging to pick the most suitable instance type. As a result, users may choose instance types that are too big for their needs and far more expensive. Tracking resource utilization and frequently making the relevant instance trade-offs can help optimize utilization and cost efficiency.
- Leaving instances running idle. One amazing advantage of AWS is the ability to choose and provision instances based on the operational needs of your business. It’s simply a matter of adding a new server through a simple wizard. However, as a by-product of this flexibility, users easily lose track of their instances and forget to turn them off, like leaving a room with the lights on. This results in confusion, wasted time, and spiraling costs.
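To make the oversizing and idle-instance points above concrete, here is a minimal sketch of the kind of check a team could run against its own utilization data. The thresholds, instance IDs, and CPU samples are all illustrative assumptions (in practice you would pull real metrics from Amazon CloudWatch); this is a sketch of the heuristic, not a definitive tool.

```python
# Sketch: flag idle and oversized instances from CPU-utilization samples.
# Thresholds and instance data below are illustrative assumptions only;
# real numbers would come from a metrics source such as Amazon CloudWatch.

IDLE_AVG_CPU = 5.0        # avg CPU % below which an instance looks idle (assumed cutoff)
OVERSIZED_PEAK_CPU = 40.0  # peak CPU % below which a smaller type may suffice (assumed cutoff)

def classify_instance(samples):
    """Classify one instance from a list of CPU-utilization percentages."""
    if not samples:
        return "no-data"
    avg = sum(samples) / len(samples)
    peak = max(samples)
    if avg < IDLE_AVG_CPU:
        return "idle"          # candidate for shutdown
    if peak < OVERSIZED_PEAK_CPU:
        return "oversized"     # candidate for a smaller instance type
    return "right-sized"

# Hypothetical hourly CPU samples keyed by made-up instance IDs.
utilization = {
    "i-0aaa": [1.2, 0.8, 2.1, 1.5],      # a forgotten test box
    "i-0bbb": [22.0, 30.5, 18.0, 25.0],  # could run on a smaller type
    "i-0ccc": [55.0, 80.2, 62.1, 71.4],  # well matched to its load
}

for instance_id, samples in utilization.items():
    print(instance_id, classify_instance(samples))
```

Even a simple sweep like this, run on a schedule, surfaces the "lights left on" instances the article describes before they turn into a billing surprise.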
Hit TechCrunch for the entire top 10.