
The best services for running machine learning models on AWS – CloudSavvy IT




Machine learning is a huge industry, and AWS has a lot of support for it. Here we discuss the best services for building, training, and running both custom and preconfigured machine learning models on the AWS platform.

SageMaker

SageMaker is AWS's fully managed machine learning suite, designed to eliminate the manual work of configuring servers for training and inference. From SageMaker, you can create and train models using datasets you provide, with all your work stored in a "notebook". This is the most complete experience you will find on AWS for running machine learning models.

SageMaker handles the creation of preconfigured training instances automatically, which can save a lot of money otherwise wasted spinning up and configuring an expensive training instance. SageMaker also has a marketplace for algorithms, similar to Amazon Machine Images, that you can run on the platform. Some are free, while others charge an hourly rate to run.

Once you have a model, deployment is quite simple. All you have to do is add your model to your endpoint configuration,

[Screenshot: endpoint configuration for SageMaker deployment]

and select the instance type (and optional Elastic Inference accelerators) you want to use.

[Screenshot: settings for the SageMaker model]

You can also use the model in Batch Transform jobs to run inference over an entire dataset and store the results in S3.
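The endpoint setup described above can be sketched with boto3, AWS's Python SDK. The model name, instance type, and endpoint names below are illustrative placeholders, and the actual API calls are left commented out since they require AWS credentials and an already-trained model:

```python
# Sketch of the SageMaker endpoint setup described above, assuming
# boto3 is installed (pip install boto3). All names are placeholders.

def endpoint_config_params(model_name, instance_type="ml.m5.large"):
    """Build the create_endpoint_config request: pick the trained model
    and the ("ml."-prefixed) instance type it should be served on."""
    return {
        "EndpointConfigName": f"{model_name}-config",
        "ProductionVariants": [{
            "VariantName": "AllTraffic",
            "ModelName": model_name,
            "InitialInstanceCount": 1,
            "InstanceType": instance_type,
            # Optional Elastic Inference accelerator add-on:
            # "AcceleratorType": "ml.eia2.medium",
        }],
    }

params = endpoint_config_params("my-model")

# The calls themselves need credentials and an existing SageMaker model:
# import boto3
# sm = boto3.client("sagemaker")
# sm.create_endpoint_config(**params)
# sm.create_endpoint(EndpointName="my-model-endpoint",
#                    EndpointConfigName=params["EndpointConfigName"])
```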

But while SageMaker is "free to use," it's not exactly free to run. SageMaker only allows you to deploy to special instance types, denoted by the "ml." prefix. These are effectively the same as regular EC2 instances, but with one important difference: they cost 40% more across the board. That premium is the price of using SageMaker. SageMaker supports Spot Instances, which may help reduce costs, but it will still be more expensive than EC2.

While SageMaker allows the use of Elastic Inference accelerator add-ons, they are also subject to the same 40% price increase.

Elastic Inference + EC2

If you prefer to set things up yourself, or want to save some money over the pricier SageMaker instances, there is always regular ol' EC2. This gives you the freedom to configure your servers however you want and full control over the installed software.

Training models is generally much more compute-intensive than running inference. This is where SageMaker can have an advantage: being a fully managed service, you only pay for the time you actually spend training, not the time spent waiting for servers to start, loading your data, and shutting down afterwards. Still, EC2 instances can be stopped and started at will, and Spot Instances are perfect for this task, so in practice this is not much of a problem.

But running inference in production often does not require the full power of an entire GPU, which is expensive to run. To combat this, AWS provides a service called Elastic Inference that lets you rent GPU accelerator add-ons for existing EC2 instances. These can be attached to instances of any type and are billed per hour based on the power of the accelerator, effectively giving you a whole new SKU of GPU instances below the powerful (and expensive) p3 lineup.
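Attaching an accelerator happens as part of the instance launch request. A minimal sketch with boto3, where the AMI ID is a placeholder and the accelerator type (eia2.medium here) is illustrative:

```python
# Sketch: launching a plain CPU instance with an Elastic Inference
# accelerator attached via ec2.run_instances. The AMI ID below is a
# placeholder; substitute one of your own (e.g. a Deep Learning AMI).

def run_instances_params(instance_type="m5.xlarge",
                         accelerator_type="eia2.medium"):
    """Build the run_instances request pairing a CPU instance with a
    GPU accelerator add-on, billed by the accelerator's power."""
    return {
        "ImageId": "ami-0123456789abcdef0",  # placeholder AMI ID
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
        "ElasticInferenceAccelerators": [
            {"Type": accelerator_type, "Count": 1},
        ],
    }

# Needs AWS credentials to actually launch:
# import boto3
# ec2 = boto3.client("ec2")
# ec2.run_instances(**run_instances_params())
```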

While Elastic Inference is not a computing platform on its own, it gets you GPU acceleration at a fraction of the cost. The cheapest full-GPU p3 instance, p3.2xlarge, costs $3.06 per hour to run and comes with 8 cores, 61 GB of RAM, and 16 TFLOPS of GPU performance. For an accurate comparison with the GPU-only EI accelerators, we subtract the vCPU and RAM costs: a similarly specced m5.4xlarge costs $0.768, so the price AWS effectively charges for a single Tesla V100 GPU is about $2.292 per hour, give or take, or about $0.143 per TFLOP. The cheapest EI accelerator, which provides a single TFLOP of performance, costs $0.120, a 16% reduction on the EC2 price. The 4 TFLOP option is even better: a 40% reduction compared to EC2.
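The back-of-the-envelope arithmetic above works out as follows, using the prices quoted in this article:

```python
# Reproducing the per-TFLOP cost comparison from the text.
p3_2xlarge = 3.06      # $/hr: 8 vCPUs, 61 GB RAM, 16 TFLOPS (Tesla V100)
m5_4xlarge = 0.768     # $/hr: similarly specced CPU/RAM, no GPU

gpu_cost = p3_2xlarge - m5_4xlarge   # ~2.292 $/hr for the GPU alone
per_tflop = gpu_cost / 16            # ~0.143 $/hr per TFLOP on EC2

ei_1tflop = 0.120                    # cheapest EI accelerator: 1 TFLOP
savings = 1 - ei_1tflop / per_tflop  # ~16% cheaper than the EC2 rate

print(f"GPU alone: ${gpu_cost:.3f}/hr, ${per_tflop:.3f} per TFLOP")
print(f"1-TFLOP EI accelerator saves {savings:.0%} vs. EC2")
```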

AWS also provides preconfigured environments for running machine learning with Deep Learning AMIs. These come pre-installed with ML frameworks and interfaces like TensorFlow, PyTorch, Apache MXNet, and many others. They are completely free to use and are available on both Ubuntu and Amazon Linux.

Elastic Inference accelerators also support automatic scaling, so you can configure them to scale up to meet growing demand and scale down at night when not in much use.

AWS's own machine learning services

While these services do not allow you to run your own custom models, they provide many useful features for applications that use machine learning under the hood. In a way, these services are a front end to machine learning models that AWS has already built and trained.

AWS Personalize is a general-purpose recommendation engine. You give it a list of products, services, or articles, feed it user activity, and it spits out recommendations for new things to suggest to that user. It is based on the same technology that powers recommendations on Amazon.com.
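Once a Personalize campaign is trained, fetching recommendations for a user is a single runtime call. A sketch with boto3, where the campaign ARN and user ID are placeholders:

```python
# Sketch: requesting recommendations from a trained Personalize campaign.
# The campaign ARN and user ID below are illustrative placeholders.

def recommendations_request(campaign_arn, user_id, num_results=10):
    """Build the get_recommendations request for personalize-runtime."""
    return {
        "campaignArn": campaign_arn,
        "userId": user_id,
        "numResults": num_results,
    }

# Needs credentials and a deployed campaign:
# import boto3
# runtime = boto3.client("personalize-runtime")
# resp = runtime.get_recommendations(**recommendations_request(
#     "arn:aws:personalize:us-east-1:123456789012:campaign/demo",  # placeholder
#     "user-42"))
# for item in resp["itemList"]:
#     print(item["itemId"])
```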

AWS Lex is a fully managed chatbot service that can be configured with custom commands and routines, powered by the same technology behind Alexa. Chatbots can be text only or can be fully interactive voice bots with AWS Transcribe for speech-to-text and AWS Polly for text-to-speech, both of which are standalone services.
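Sending a text message to a Lex bot is likewise one runtime call. A sketch using the Lex V1 runtime's post_text, with an illustrative bot name and alias:

```python
# Sketch: sending a text utterance to a Lex (V1) chatbot.
# Bot name, alias, and user ID below are illustrative placeholders.

def lex_text_request(bot_name, bot_alias, user_id, text):
    """Build the post_text request for the lex-runtime client."""
    return {
        "botName": bot_name,
        "botAlias": bot_alias,
        "userId": user_id,
        "inputText": text,
    }

# Needs credentials and a published bot:
# import boto3
# lex = boto3.client("lex-runtime")
# resp = lex.post_text(**lex_text_request(
#     "OrderBot", "prod", "user-42", "I'd like a large pizza"))
# print(resp["message"])
```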

AWS Rekognition performs object recognition in images and video, a common machine learning task. It can recognize the most common objects, generate keywords from images, and can even be configured with custom labels to further extend its detection capabilities.
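Detecting labels in an image stored in S3 is a single call to the Rekognition client. A sketch with boto3, where the bucket and object key are placeholders:

```python
# Sketch: label detection on an S3-hosted image via Rekognition.
# Bucket and key below are illustrative placeholders.

def detect_labels_request(bucket, key, max_labels=10, min_confidence=80):
    """Build the detect_labels request for the rekognition client."""
    return {
        "Image": {"S3Object": {"Bucket": bucket, "Name": key}},
        "MaxLabels": max_labels,
        "MinConfidence": min_confidence,
    }

# Needs credentials and a readable S3 object:
# import boto3
# rek = boto3.client("rekognition")
# resp = rek.detect_labels(**detect_labels_request("my-bucket", "photo.jpg"))
# for label in resp["Labels"]:
#     print(label["Name"], label["Confidence"])
```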

