Introducing Amazon SageMaker Model Monitor – Maintain quality of ML models
ML models make predictions by learning patterns from the data used to train them. After a model is deployed in production, real-world data begins to differ over time from the training data, leading to deviations in model quality and, eventually, less accurate predictions. For example, changing economic conditions could drive new interest rates that affect home purchasing predictions. Amazon SageMaker Model Monitor provides a fully managed experience to monitor models in production, detect deviations, and take timely actions such as auditing or retraining models.
With Amazon SageMaker Model Monitor, you can easily collect prediction requests and responses from your endpoints, analyze the data collected in production, and compare it against your training or validation data to detect deviations. You can use Model Monitor's built-in rules to detect drift right away on structured (tabular) data sets, add data transformations before the built-in rules run, or write your own custom rules. Monitoring jobs can be scheduled to run at a regular cadence, for example hourly or daily; they push summary metrics to Amazon CloudWatch, so you can set alarms and trigger corrective actions, and they run on the broad range of instance types supported in Amazon SageMaker.
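The flow above can be sketched with the SageMaker Python SDK. This is a minimal illustration, not a complete recipe: the bucket, IAM role ARN, and endpoint name are placeholders you would replace with your own, and the calls assume a model has already been deployed with data capture enabled.

```python
# Sketch: enable data capture, baseline the training data, and
# schedule an hourly monitoring job with the SageMaker Python SDK.
from sagemaker.model_monitor import (
    CronExpressionGenerator,
    DataCaptureConfig,
    DefaultModelMonitor,
)
from sagemaker.model_monitor.dataset_format import DatasetFormat

s3_prefix = "s3://my-bucket/model-monitor"  # placeholder bucket

# 1. Capture prediction requests and responses at the endpoint.
#    Pass this config to model.deploy(..., data_capture_config=...).
capture_config = DataCaptureConfig(
    enable_capture=True,
    sampling_percentage=100,
    destination_s3_uri=f"{s3_prefix}/captured",
)

# 2. Compute baseline statistics and constraints from the training data.
monitor = DefaultModelMonitor(
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role
    instance_count=1,
    instance_type="ml.m5.xlarge",
)
monitor.suggest_baseline(
    baseline_dataset=f"{s3_prefix}/training-dataset.csv",
    dataset_format=DatasetFormat.csv(header=True),
    output_s3_uri=f"{s3_prefix}/baseline",
)

# 3. Schedule hourly monitoring that compares captured traffic
#    against the baseline; violations surface in CloudWatch metrics.
monitor.create_monitoring_schedule(
    monitor_schedule_name="my-endpoint-monitor",
    endpoint_input="my-endpoint",  # placeholder endpoint name
    output_s3_uri=f"{s3_prefix}/reports",
    statistics=monitor.baseline_statistics(),
    constraints=monitor.suggested_constraints(),
    schedule_cron_expression=CronExpressionGenerator.hourly(),
)
```

Each scheduled run writes a statistics and violations report to the output S3 prefix, which you can inspect or wire to an alarm for retraining.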
When you deploy models in Amazon SageMaker and enable Model Monitor with built-in rules on an ml.m5.xlarge instance, you get up to 30 hours of monitoring per month, aggregated across all endpoints, at no charge.