Using Amazon SageMaker to train, test, and deploy models is an efficient way to handle the machine learning lifecycle. Because machine learning is an iterative process, SageMaker lets data scientists manage the entire pipeline with autoscaling, advanced data security, data and model monitoring, high performance, and low-cost ML development. Compared to a local or on-demand setup, SageMaker can save up to 67% of total resource utilization.
However, deploying custom machine learning models on a cloud platform such as AWS can be daunting: there is little documentation covering individual use cases, and every model is different.
This tutorial walks you through a framework for deploying any custom model in SageMaker using Docker. It covers how to deploy the model as a REST API and how to test it through Postman so you can showcase the model to customers and stakeholders.
What you’ll learn in this tutorial:
1. How to create custom ML models in AWS SageMaker
2. How to deploy the model as a REST API
3. All you need to know about the relevant AWS services: SageMaker, ECR (Elastic Container Registry), Lambda, and API Gateway, plus one non-AWS tool, Docker
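To make the architecture above concrete, here is a minimal sketch of the glue between API Gateway and SageMaker: a Lambda handler that forwards an incoming JSON request to a model endpoint and returns the prediction. The endpoint name `my-custom-model` and the JSON request/response shapes are assumptions for illustration; the inference code inside your Docker container defines the actual contract.

```python
import json


# Assumed endpoint name; replace with the name of your deployed SageMaker endpoint.
ENDPOINT_NAME = "my-custom-model"


def build_payload(event):
    """Extract the JSON body that API Gateway passes to Lambda (proxy integration)."""
    body = event.get("body") or "{}"
    return json.loads(body)


def build_response(status_code, body):
    """Shape a Lambda proxy-integration response that API Gateway can return."""
    return {
        "statusCode": status_code,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }


def lambda_handler(event, context):
    """Forward the request payload to the SageMaker endpoint and relay the result."""
    # boto3 is imported lazily here; it is preinstalled in the Lambda runtime.
    import boto3

    runtime = boto3.client("sagemaker-runtime")
    payload = build_payload(event)
    result = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=json.dumps(payload),
    )
    prediction = json.loads(result["Body"].read().decode("utf-8"))
    return build_response(200, prediction)
```

Once this Lambda is wired to an API Gateway route, you can exercise the REST API from Postman by sending a POST request with a JSON body matching what your model's inference code expects.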