Success case: bellpi

bellpi is a Colombian startup that offers an omnichannel, end-to-end e-commerce solution for buying, selling, and financing used vehicles.

The challenge:

bellpi needed to migrate its model for predicting the purchase and sale prices of used vehicles to the cloud, in order to optimize processes and apply good infrastructure practices to its e-commerce platform, allowing the business to scale while reducing cost and resource consumption. It also sought to decouple its architecture, which was based on a single EC2 instance running the ML model developed to calculate vehicle prices. That instance hosted processes such as:

Web Scraping code

Data cleaning

Processing code

Development code

Training of ML models

The goal was to improve the scalability of these processes and the accuracy of the value returned by the model, so that pricing could be generated more automatically.
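The stages listed above can be pictured as a chained pipeline. The sketch below is only an illustration of that chaining; the function bodies, field names, and placeholder data are assumptions, not bellpi's actual code.

```python
"""Minimal sketch of the pipeline stages named above: web scraping,
data cleaning, and model training, chained end to end.
All names and the placeholder listing data are illustrative assumptions."""


def scrape_listings():
    # Web scraping stage: collect raw used-vehicle listings.
    # A real implementation would fetch and parse listing pages;
    # here we return hard-coded placeholder data.
    return [{"brand": "mazda", "year": "2018", "price": " 52000000 "}]


def clean(rows):
    # Data cleaning stage: normalize types and strip stray whitespace.
    return [
        {
            "brand": r["brand"],
            "year": int(r["year"]),
            "price": int(r["price"].strip()),
        }
        for r in rows
    ]


def train(rows):
    # Training stand-in: a trivial "model" that predicts the mean price.
    prices = [r["price"] for r in rows]
    return sum(prices) / len(prices)


# Chain the stages: scrape -> clean -> train.
model_price = train(clean(scrape_listings()))
```

Running each stage as a separate script (as the solution below describes) is what lets them scale and fail independently instead of living in one monolithic process.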

The solution:

Their priority was to decouple everything hosted on the EC2 instance and build an adequate infrastructure for all operations. We discussed with the client the benefits of AWS: the ease of integration, the low cost, and the advantages of a scalable architecture that benefits from economies of scale and from the interaction between AWS services.

A container was created in Fargate, along with Lambda functions to load the different data sources. The original code was split into several scripts, separating the model's cleaning, training, and evaluation stages. Within SageMaker Studio, notebooks were created for training the general model, generating the price prediction, and analyzing the results produced by the model. A table was created in DynamoDB to store the results so they can be consumed by the website's APIs.
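The last step of that flow, a Lambda function persisting model results to DynamoDB for the website's APIs to read, could look roughly like the sketch below. The table name `bellpi-price-predictions`, the key schema, and the attribute names are assumptions for illustration, not bellpi's actual schema.

```python
"""Sketch of the prediction-storage step: a Lambda handler writes model
output to a DynamoDB table so the website's APIs can consume it.
Table name, keys, and attributes are illustrative assumptions."""
import decimal
import json


def build_item(vehicle_id, predicted_price, model_version):
    """Shape one prediction as a DynamoDB item.

    DynamoDB rejects Python floats, so the price is stored as Decimal.
    """
    return {
        "vehicle_id": vehicle_id,  # assumed partition key
        "predicted_price": decimal.Decimal(str(predicted_price)),
        "model_version": model_version,
    }


def lambda_handler(event, context, table=None):
    """Lambda entry point: persist each prediction carried in the event.

    `table` is injected here for local testing; inside AWS it would
    default to boto3.resource("dynamodb").Table("bellpi-price-predictions").
    """
    if table is None:
        import boto3  # only available/needed in the AWS runtime
        table = boto3.resource("dynamodb").Table("bellpi-price-predictions")
    for p in event["predictions"]:
        table.put_item(
            Item=build_item(
                p["vehicle_id"], p["price"], p.get("model_version", "v1")
            )
        )
    return {
        "statusCode": 200,
        "body": json.dumps({"written": len(event["predictions"])}),
    }
```

Injecting the table object keeps the handler testable without AWS credentials; the website's APIs would then read the same table with a `get_item` or `query` call keyed on `vehicle_id`.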


With the project implemented and in production, bellpi was able to decouple everything that had lived on a single EC2 instance, automate the loading of the different data sources with a serverless-first architecture, and migrate its price prediction model to the cloud, providing more resources and greater efficiency when running training, and automating the generation of predictions to keep its website up to date.

This contributed to significant savings in time and resources during web-scraping execution and model training, and freed the data scientist to spend the 16 hours previously consumed by the full data-loading and model-execution process on building other models.