Amazon Elastic Compute Cloud, also known as Amazon EC2 or AWS EC2, is the flagship web service that AWS provides. It is certainly the most popular, and definitely the most commonly used.
It is, essentially, a way to get resizable compute capacity in the cloud.
Developers get complete control over the computing resources they enable via this service, and they can quickly and easily scale capacity up or down as and when needed. For organizations, this means they can spend less money and still end up with just the right amount of cloud services and computing power that their online presence requires.
Amazon launched EC2 on August 25, 2006, in public beta, and it quickly became one of the core parts of AWS. It is currently one of the better-known components of Amazon Web Services, allowing customers to rent computing resources by the hour in the form of virtual machines.
Instances, as they are called.
The idea of renting computing resources by the hour is not new. It goes all the way back to the glory days of the 1960s, when it was simply not financially feasible for companies or university departments to own a dedicated computer.
In the days before the cloud really took off, capacity planning required a large amount of time and forward thinking from organizations. Bringing new hardware online was an expensive, time-consuming, multistep process. Renting a virtual private server, while usually quicker than provisioning a physical machine, still came with its own set of challenges and potential delays.
That all changed with the launch of EC2, where all of this was replaced by a single API call.
If there’s one universally accepted truth in technology, it’s that new companies can experience exponential growth month after month. This can lead to service interruptions as system administrators try to ensure that the demands of their users do not surpass their supply of computing power.
At least, that was the case a decade ago.
Both limited resources and oversupply can be terminal for organizations. Nowhere was this more of a deciding factor than in the failure of many companies in the dot-com bubble of 2000, as companies spent huge amounts of money in capital expenses building datacenter capacity for users who never materialized.
EC2, then, changes the economics of computing by only requiring you to pay for the capacity that you actually use. And on top of that, Amazon provides developers with a range of tools, services, operating system choices, along with connectivity and management options to build their apps in the cloud.
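To see how that changes the economics, consider a quick back-of-the-envelope comparison. The hourly rate and traffic profile below are made-up illustrative numbers, not actual EC2 prices; the point is only the shape of the calculation.

```python
HOURLY_RATE = 0.10  # assumed cost of one instance-hour, in dollars

# Instances actually needed during each hour of a hypothetical day:
# quiet overnight, a spike during business hours, a moderate evening.
demand = [2] * 8 + [10] * 8 + [4] * 8  # 24 hourly values

# Fixed provisioning: you must own enough capacity for the peak, all day.
fixed_cost = max(demand) * len(demand) * HOURLY_RATE

# Pay-per-use: you pay only for the instance-hours you actually run.
usage_cost = sum(demand) * HOURLY_RATE

print(f"fixed: ${fixed_cost:.2f}  pay-per-use: ${usage_cost:.2f}")
```

With this (invented) traffic pattern, paying only for the hours used costs roughly half as much as provisioning for the peak, and the gap widens the spikier the traffic gets.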
AWS documentation makes the whole process sound all technical and complex, when it really isn’t. Just think of this as a computer that can stretch when you need more resources, and contract when you don’t. In other words, since all this is virtual, you can add extra power in terms of CPU, memory, storage, even graphics processing, as your computing requirements change.
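One way to picture that "stretching" is as picking the next size up when requirements grow. The catalogue below uses generic names and made-up vCPU, memory, and price figures purely for illustration; it is not a list of real EC2 instance types.

```python
# name: (vCPUs, memory in GiB, assumed hourly price in dollars)
INSTANCE_TYPES = {
    "small":  (2, 4, 0.05),
    "medium": (4, 16, 0.20),
    "large":  (8, 32, 0.40),
    "xlarge": (16, 64, 0.80),
}

def smallest_fit(vcpus_needed: int, mem_needed_gib: int) -> str:
    """Return the cheapest type that satisfies both CPU and memory needs."""
    candidates = [
        (price, name)
        for name, (cpu, mem, price) in INSTANCE_TYPES.items()
        if cpu >= vcpus_needed and mem >= mem_needed_gib
    ]
    if not candidates:
        raise ValueError("no instance type is large enough")
    return min(candidates)[1]

print(smallest_fit(2, 8))   # memory, not CPU, forces the size up here
print(smallest_fit(8, 16))
```

Because everything is virtual, "moving" a workload to a bigger shape is a configuration change rather than a hardware purchase.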
Instances can be launched and terminated automatically based on your current traffic levels. You can design your infrastructure to operate at 80% utilization, as an example, and then scale it up when you need more capacity.
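The 80% rule above amounts to a simple sizing calculation: run the smallest fleet that keeps average utilization at or below the target, and recompute as traffic changes. The per-instance capacity below is an assumed figure for illustration.

```python
import math

CAPACITY_PER_INSTANCE = 1000  # assumed requests/sec one instance can serve
TARGET_UTILIZATION = 0.80     # headroom so a traffic spike doesn't saturate

def instances_needed(requests_per_sec: float) -> int:
    """Smallest fleet size that keeps utilization at or below the target."""
    usable = CAPACITY_PER_INSTANCE * TARGET_UTILIZATION
    return max(1, math.ceil(requests_per_sec / usable))

for load in (500, 4000, 12_000):
    print(f"{load} req/s -> {instances_needed(load)} instances")
```

An auto-scaling policy is essentially this function evaluated continuously: as measured load crosses the threshold, instances are launched or terminated to bring the fleet back toward the target.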
Flexibility is at the heart of this AWS offering, just like all other services that the company provides. Add to that the ease of use, integration with other AWS services, reliability, security, and complete control that EC2 offers, and you have a winner!
Companies large and small rely on AWS EC2, from startups to giants like Netflix, Expedia and Lamborghini. They can fire up anywhere from a couple of virtual servers to thousands in a matter of minutes, all while staying within their budget.
We’ll be taking a detailed look at everything Amazon EC2 offers, what an instance is, its components, pricing, types of instances, security, regions, monitoring and migration in future articles, but you can find out more details about the service straight from AWS here.