
Getting started with AWS Autoscaling – CloudSavvy IT


Auto scaling is simple in concept: when your servers start to get overloaded with traffic, AWS Auto Scaling spins up new servers to meet demand. This can help you both cut costs and scale quickly.

Automatic scaling saves money

With auto scaling, you can scale up to meet traffic demands, but it also solves a problem with traditional server hosting: you have to size your servers for peak load, yet those servers may sit mostly idle during off-peak hours. You still pay the server's hourly rate even when you aren't using it. That's bad for your wallet, and also bad for AWS, because they could be selling that spare capacity to someone else.

Say your application requires 16 vCPUs of power under peak load. You could meet that with a single c5.4xlarge instance, which costs about $500 per month. You can get that down to roughly $200 per month effectively by buying Reserved Instances up front on a 3-year contract, but you'd still be paying full price for an instance sized around your peak capacity. And if your needs change within the contract period, you're stuck with that instance until the contract ends.

However, if your application's load varies over the day, auto scaling can help optimize costs. You can instead run several c5.xlarge instances with 4 vCPUs each, and spin up new ones when demand requires it. With EC2 Spot Instances, you can also have your Auto Scaling group buy spare compute capacity at steep discounts.
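As a back-of-the-envelope illustration, here is a small script comparing the two approaches. The hourly rates and the 8-hour peak window are illustrative assumptions, not live AWS pricing:

```shell
#!/bin/sh
# Rough monthly cost comparison (730 hours/month).
# Hourly rates below are illustrative assumptions, not live AWS pricing.
RATE_BIG=0.68    # one c5.4xlarge (16 vCPU), on-demand
RATE_SMALL=0.17  # one c5.xlarge (4 vCPU), on-demand

# Fixed-size hosting: one c5.4xlarge running around the clock.
fixed=$(awk -v r="$RATE_BIG" 'BEGIN { printf "%.2f", r * 730 }')

# Auto-scaled: one c5.xlarge always on, plus three more
# for 8 peak hours a day over ~30 days.
scaled=$(awk -v r="$RATE_SMALL" 'BEGIN { printf "%.2f", r * 730 + 3 * r * 8 * 30 }')

echo "fixed: \$$fixed/month  auto-scaled: \$$scaled/month"
```

With these assumed rates, the auto-scaled fleet still covers the same 16-vCPU peak, but costs roughly half as much, because three of the four instances only run during peak hours.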

AWS has several auto scaling services for different products: you can auto scale Aurora and DynamoDB read replicas, and auto scale Amazon's Elastic Container Service (ECS). This article covers EC2 Auto Scaling, since EC2 instances are most likely what you want to scale anyway.

Build your infrastructure around automation

For auto scaling to work, you must automate the entire lifecycle of a server. Creating the server, installing all the dependencies your app needs to run, installing your code, and running your code at startup all have to happen without manual intervention for auto scaling to make sense.

There are two simple ways to do this, and both have different uses.

The first is Amazon Machine Images, or AMIs. Your EC2 server is probably already running from an AMI, such as Amazon Linux 2. But an AMI is more than an operating system: it is an image containing the operating system, installed software, user data, and configuration, all in one. You can create a custom AMI that contains all of your software (such as nginx, WordPress, PHP, etc.) and its configuration, and use it to spin up copies of your existing server.

This method is very useful if you're simply hitting the limits of a single server and want to scale out, or if you just want to cut costs by scaling your fleet up and down over the course of the day. The main issue is that version control is poor: you have to create a new AMI every time you want to make a change, or automate some way to pull updated code and configuration from a tool like git.
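One common way to soften that limitation is a small user-data script that pulls the latest code when an instance boots, so the AMI itself doesn't have to be rebuilt for every code change. A minimal sketch; the repository path, branch, and service name are placeholder assumptions:

```shell
#!/bin/bash
# EC2 user data: runs as root when the instance first boots.
# Instead of baking the latest code into the AMI, pull it from git.
# The app directory, branch, and service name are hypothetical examples.
set -e
cd /var/www/app
git fetch origin
git reset --hard origin/main   # deploy the latest commit on main
systemctl restart nginx        # restart whatever serves the app
```

The AMI then only needs rebuilding when dependencies or system configuration change, not on every deploy.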

The second method is to use containers. Containers are a Unix concept that lets applications be packaged and run in an isolated, virtualized environment while keeping most of the speed advantages of running on bare metal. You can think of it as burning everything your application needs to run onto a CD: you can make multiple copies of that CD and run them on multiple servers.

Each time you need to make an update, you simply update the CD and distribute the new version. The way Docker works makes version management quite simple. However, moving an existing application to Docker can require more initial setup than you're comfortable with, since it demands a significant change in how you develop and deploy your systems.

We will cover the AMI method in this article, as it is much simpler; but if you go the container route, you're better off using Amazon's managed container services rather than EC2 auto scaling. You can read our guide to getting started with AWS ECS to learn more.

How to get started

You need a few things to get started. The first is a custom AMI. These are fairly easy to create: from the EC2 Management Console, right-click your current server and choose Image > Create Image. This opens a dialog that takes a snapshot of your server and creates an AMI from that snapshot. Give it a name and description and select "Create Image."
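If you prefer working from a terminal, the same thing can be done with the AWS CLI's `aws ec2 create-image` command. The instance ID and names below are placeholders:

```shell
# Create an AMI from a running instance (ID and names are placeholders).
# By default the instance is rebooted first so the snapshot is consistent.
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "my-app-server-v1" \
  --description "Web server with nginx, PHP, and app code baked in"
```

The command returns the new AMI's ID, which you'll reference in the launch configuration later.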

The Create Image dialog, where you give the AMI a name and description

Once the AMI has been created (it can take a few minutes), scroll down to the bottom of the EC2 sidebar and select "Launch Configurations" under the "Auto Scaling" section. Create a new launch configuration and select your custom AMI as the base.

Choose your custom AMI as the base

Select the instance type that you want to use as your scaling step. For example, if you want to scale up in two-vCPU increments, select a 2-vCPU instance type. You'll scale more often, but your costs may be better optimized.

Then configure the launch details. Consider requesting Spot Instances, especially if you plan to scale up during the day and back down at night. Spot Instances with a defined duration can run for up to 6 hours. You must enter a maximum price; if you set it to the hourly cost of the On-Demand version of the instance, it will effectively always run.

Configure launch information

You can also specify a setup script here under the advanced settings, either pasted as text or uploaded as a file; it runs when each instance starts.

Enter an installation script

Then add storage, select a security group, and choose a key pair, just as you would when creating a normal EC2 instance (even though this is only a template).
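The console steps above can also be sketched with the AWS CLI's `create-launch-configuration` command. Everything here (names, IDs, key pair, spot price, and script filename) is a placeholder assumption:

```shell
# Create a launch configuration mirroring the console steps.
# All names, IDs, the spot price, and setup.sh are placeholders.
aws autoscaling create-launch-configuration \
  --launch-configuration-name my-app-lc-v1 \
  --image-id ami-0123456789abcdef0 \
  --instance-type c5.xlarge \
  --security-groups sg-0123456789abcdef0 \
  --key-name my-keypair \
  --spot-price "0.17" \
  --user-data file://setup.sh
```

Omit `--spot-price` if you'd rather launch plain On-Demand instances.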

At the end, choose to create an Auto Scaling group from the newly created launch configuration. Enter a name for the group, set its initial size, and select your subnets.

Then configure your scaling policies. You'll want to select a minimum and maximum size to scale between, and a metric to scale on, such as average CPU utilization or average network traffic. You can also set CloudWatch alarms to scale instances based on other metrics.
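As a rough CLI equivalent of these console steps, you could create the group and attach a target-tracking policy on average CPU utilization. All names, the subnet ID, the size range, and the 50% target are placeholder assumptions:

```shell
# Create the Auto Scaling group from the launch configuration
# (names, subnet ID, and sizes are placeholders).
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name my-app-asg \
  --launch-configuration-name my-app-lc-v1 \
  --min-size 1 --max-size 4 --desired-capacity 1 \
  --vpc-zone-identifier subnet-0123456789abcdef0

# Scale to keep average CPU across the group near 50%.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name my-app-asg \
  --policy-name cpu-target-50 \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{
    "PredefinedMetricSpecification": {
      "PredefinedMetricType": "ASGAverageCPUUtilization"
    },
    "TargetValue": 50.0
  }'
```

Target tracking handles both scaling out and scaling back in, so a single policy covers the whole load curve.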

Configure your scaling policies

You must also enter the time, in seconds, that instances need to warm up. If you use a prebaked AMI, this will be much lower, but you still need to test to find out how long it actually takes.

You can then configure notifications and tags, and review your configuration before launching. Note that creating this Auto Scaling group will provision servers for you, so be prepared to pay for them.

From the "Auto Scaling Groups" tab in the EC2 console, you can view your group's activity, such as current instances and launch errors. Your group should now scale up and down with the load. Keep an eye on its behavior for the first few days to make sure everything is in order.
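You can check the same activity from the CLI; the group name here is a placeholder:

```shell
# Show recent scaling activity (launches, terminations, failures)
# for a group assumed to be named my-app-asg.
aws autoscaling describe-scaling-activities \
  --auto-scaling-group-name my-app-asg \
  --max-items 10
```

Each activity entry includes a status and cause, which is the quickest way to diagnose instances that fail to launch.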

When you need to update your servers, create a new launch configuration with a new AMI and set it as the configuration for your Auto Scaling group.
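A sketch of that update from the CLI, with placeholder names and AMI ID:

```shell
# Roll out a new AMI: create a fresh launch configuration,
# then point the group at it. Existing instances keep running;
# newly launched instances use the new configuration.
aws autoscaling create-launch-configuration \
  --launch-configuration-name my-app-lc-v2 \
  --image-id ami-0fedcba9876543210 \
  --instance-type c5.xlarge

aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name my-app-asg \
  --launch-configuration-name my-app-lc-v2
```

Launch configurations are immutable, which is why an update always means creating a new one rather than editing the old.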
