Considerations Before Migrating Your
Machine Learning Model to AWS

Advanced analytics is becoming increasingly important within organizations, which are adopting Artificial Intelligence (AI) techniques to optimize and automate their operational processes. Business needs that AI/ML can address include demand forecasting, fraud detection, document verification and classification, classifying customers by risk level, facial and voice recognition of employees, streamlining e-commerce sales with personalized recommendations, and improving customer service, among others.

Many of these models are created on data scientists' local machines, which prevents them from running automatically or scaling to the workloads that companies require.

AWS already offers ready-to-use AI services, such as Amazon Forecast, which generates demand predictions from historical data, and Amazon Rekognition, which analyzes images to identify emotions, objects, people, and more.

In our experience across different industries, we have observed that many organizations lack defined Artificial Intelligence frameworks for carrying out end-to-end workflows with security and governance controls: controls that manage risks such as unauthorized access and unencrypted data, keep the data feeding the model clean, scale machines automatically without human intervention, and handle data replication and backup.

This is where Amazon SageMaker can be key: it is a fully managed service that enables developers and data scientists to prepare, build, train, and deploy machine learning models at scale. However, some considerations must be taken into account for this platform to deliver good results.

First of all, know the use case of the model. Once the business need to be solved through AI/ML has been defined, evaluate whether another service within AWS can already solve it. For example, if the objective is to generate personalized recommendations for a group of users in an e-commerce site, you could consider Amazon Personalize. If, however, you do not want to use a service as a black box, or your organization has internal advanced-analytics capabilities and wants more control over the model and the ability to tune parameters for potential improvement, the next option is to work with one of the built-in AWS SageMaker algorithms, provided one fits the use case.
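The decision flow above can be sketched as a small helper function. This is purely an illustration of the reasoning, not an AWS API: the function name, parameters, and return strings are our own assumptions.

```python
def choose_approach(need_full_control: bool,
                    builtin_algorithm_fits: bool,
                    managed_service_exists: bool) -> str:
    """Illustrative sketch of the model-selection decision described above."""
    if managed_service_exists and not need_full_control:
        # e.g. Amazon Personalize for e-commerce recommendations
        return "managed AI service"
    if builtin_algorithm_fits:
        # e.g. one of SageMaker's built-in algorithms
        return "SageMaker built-in algorithm"
    # Otherwise: bring your own training script or container
    return "custom model on SageMaker"

# Example: a team with in-house data scientists that wants to tune the model
print(choose_approach(need_full_control=True,
                      builtin_algorithm_fits=True,
                      managed_service_exists=True))
```

Running this with `need_full_control=True` prints "SageMaker built-in algorithm": even when a managed service exists, a team that wants to tune the model moves down to the built-in algorithms before resorting to a fully custom model.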

If none of the above options is viable and it is best to develop the model from scratch, evaluate the programming language to be used, since SageMaker works with Python and R. With that defined, it is important to take into account other factors such as:

  • The type of instance (or virtual machine) most suitable for the requirements of the project.
  • How much memory is required, and whether a GPU is needed or a CPU alone is enough.
  • In which AWS region you are going to work.
  • Whether to run the ML model on demand or permanently.
  • Scalability of the model.
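As a rough illustration of the first two bullets, narrowing down an instance type can start from a simple rule of thumb. The function below is a hypothetical sketch, not an AWS tool: the instance families named (ml.t3, ml.m5, ml.r5, ml.p3) are real SageMaker families, but the thresholds and the mapping are our own assumptions, and you should always check the current SageMaker instance catalog and pricing for your region.

```python
def suggest_instance(memory_gib: float, needs_gpu: bool) -> str:
    """Toy heuristic mapping resource needs to a SageMaker instance type.

    The thresholds and choices here are illustrative assumptions only.
    """
    if needs_gpu:
        return "ml.p3.2xlarge"   # GPU instance, e.g. deep learning training
    if memory_gib <= 8:
        return "ml.t3.large"     # light experimentation and notebooks
    if memory_gib <= 64:
        return "ml.m5.4xlarge"   # general-purpose CPU training
    return "ml.r5.8xlarge"       # memory-heavy workloads

print(suggest_instance(memory_gib=32, needs_gpu=False))  # ml.m5.4xlarge
```

A heuristic like this is only a starting point for cost estimation; the right choice also depends on the region (not every instance type is available everywhere) and on whether the workload runs on demand or permanently.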

Amazon SageMaker not only speeds up machine learning implementation for companies; it also covers all phases of the machine learning lifecycle.

If you want to understand how to carry out an agile project by migrating your Artificial Intelligence (AI) models with AWS or you want to know more about our experiences and AWS services, contact us here.