
Machine Learning Project Checklist

DataRobot Blog

Discuss with stakeholders how accuracy and data drift will be monitored. Typical data quality checks and corrections include: missing data or incomplete records; inconsistent data formatting (e.g., a mixture of dollars and euros in a currency field); and inconsistent coding of categorical data (e.g.,
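The checks listed above can be sketched as simple record-level validations. This is an illustrative sketch only; the field names, currency rule, and allowed category values are assumptions, not DataRobot's implementation.

```python
# Illustrative data quality checks for the issues listed above.
# Field names and rules are assumptions for the sketch.

def check_record(record):
    """Return a list of data quality issues found in one record."""
    issues = []
    # Missing data or incomplete records
    for field in ("customer_id", "amount", "segment"):
        if record.get(field) in (None, ""):
            issues.append(f"missing:{field}")
    # Inconsistent formatting: e.g., a currency string in a numeric field
    amount = record.get("amount")
    if isinstance(amount, str):
        issues.append("format:amount_not_numeric")
    # Inconsistent coding of categorical data
    allowed_segments = {"retail", "enterprise"}
    segment = record.get("segment")
    if segment and segment.lower() not in allowed_segments:
        issues.append(f"coding:segment={segment}")
    return issues
```

In practice each issue would be routed to a correction step (imputation, currency normalization, recoding) rather than just reported.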


How Dialog Axiata used Amazon SageMaker to scale ML models in production with AI Factory and reduced customer churn within 3 months

AWS Machine Learning Blog

If there are features related to network issues, those users are categorized as network issue-based users. The resultant categorization, along with the predicted churn status for each user, is then transmitted for campaign purposes. By conducting experiments within these automated pipelines, significant cost savings could be achieved.
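A minimal sketch of the rule-based segmentation described above: users whose features indicate network problems are tagged as network-issue users, and the tag is paired with the model's churn prediction for campaign targeting. The feature names and thresholds are illustrative assumptions, not Dialog Axiata's actual rules.

```python
# Assumed feature names and thresholds for illustration only.
def categorize_user(features):
    """Tag a user based on which issue-related features are present."""
    if features.get("dropped_calls", 0) > 5 or features.get("low_signal_hours", 0) > 10:
        return "network_issue"
    if features.get("support_tickets", 0) > 3:
        return "service_issue"
    return "other"

def campaign_record(user_id, features, churn_probability):
    """Pair the category with the predicted churn status for campaigns."""
    return {
        "user_id": user_id,
        "category": categorize_user(features),
        "predicted_churn": churn_probability >= 0.5,
    }
```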



Schedule Amazon SageMaker notebook jobs and manage multi-step notebook workflows using APIs

AWS Machine Learning Blog

For instance, a notebook that monitors for model data drift should have a pre-step that performs extract, transform, and load (ETL) processing of new data, and a post-step of model refresh and retraining in case significant drift is detected.
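The pre-step / check / post-step workflow above can be sketched as a small conditional pipeline. The step functions and the drift threshold are assumptions for illustration; in SageMaker each step would be a scheduled notebook job chained via the notebook job APIs.

```python
# Sketch of the drift-monitoring workflow: ETL pre-step, drift check,
# and a retrain post-step that only fires on significant drift.
# The 0.2 threshold and step callables are illustrative assumptions.

def run_drift_workflow(etl_step, drift_check, retrain_step, threshold=0.2):
    data = etl_step()                # pre-step: extract/transform/load new data
    drift_score = drift_check(data)  # monitor step: measure drift on new data
    if drift_score > threshold:      # post-step: refresh only when drift is significant
        retrain_step(data)
        return "retrained"
    return "no_action"
```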


How Vodafone Uses TensorFlow Data Validation in their Data Contracts to Elevate Data Governance at Scale

TensorFlow

The following can be included as part of your Data Contract: feature names; data types; and the expected distribution of values in each column. It can also include constraints on the data, such as minimum and maximum values for numerical columns and allowed values for categorical columns.
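A data contract like the one described can be sketched as a per-feature spec of types, numeric ranges, and allowed categorical values. This is plain Python for illustration, not the TensorFlow Data Validation API; the feature names and bounds are assumptions.

```python
# Illustrative data contract: types, min/max for numerics,
# allowed values for categoricals. Names and bounds are assumed.
CONTRACT = {
    "age":     {"dtype": int, "min": 0, "max": 120},
    "country": {"dtype": str, "allowed": {"UK", "DE", "IT"}},
}

def validate_row(row, contract=CONTRACT):
    """Return a list of contract violations for one row."""
    violations = []
    for name, spec in contract.items():
        value = row.get(name)
        if not isinstance(value, spec["dtype"]):
            violations.append(f"{name}:dtype")
            continue
        if "min" in spec and value < spec["min"]:
            violations.append(f"{name}:min")
        if "max" in spec and value > spec["max"]:
            violations.append(f"{name}:max")
        if "allowed" in spec and value not in spec["allowed"]:
            violations.append(f"{name}:allowed")
    return violations
```

In TFDV the same idea is expressed as a schema that incoming batches are validated against, so producers and consumers share one machine-checkable contract.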


Improve Customer Conversion Rates with AI

DataRobot Blog

Ingest your data and DataRobot will use all these data points to train a model. Once it is deployed, your marketing team will be able to get a prediction of whether a customer is likely to redeem a coupon, and why. All of this can be integrated with your marketing automation application of choice. A look at data drift.


How to Build ETL Data Pipeline in ML

The MLOps Blog

Data Quality Check: As the data flows through the integration step, ETL pipelines can help improve data quality by standardizing, cleaning, and validating it. This ensures that the data used for ML is accurate, reliable, and consistent. How to create scalable and efficient ETL data pipelines.
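The standardize/clean/validate stage described above can be sketched as one transform step in an ETL pipeline. The record shape and transformations are illustrative assumptions.

```python
# Sketch of an ETL quality step: standardize values, drop incomplete
# rows, then validate the surviving batch. The "name" field is assumed.

def transform(records):
    cleaned = []
    for r in records:
        name = (r.get("name") or "").strip().title()  # standardize casing/whitespace
        if not name:                                  # clean: drop incomplete rows
            continue
        cleaned.append({**r, "name": name})
    # validate: every surviving record has a non-empty name
    assert all(r["name"] for r in cleaned)
    return cleaned
```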


Capital One’s data-centric solutions to banking business challenges

Snorkel AI

But there needs to be some priority order by which we consider how to build a feature library, how to group features and categorize them, and then how to join features at different scales—maybe at a customer scale or at a process level. How are you looking at model evaluation for cases where data adapts rapidly? I can briefly start.