
PACE-ML - FEATURES

 

ML ASSESSMENTS, TRAINING AND SKILLING ON MLOPS

 

The Assessment exercise helps enterprises perform a structured analysis of their data science practice to identify potential use cases and toolchains for ML model deployment using MLOps principles. Further, PACE-ML offers playbooks for skilling and training data scientists on the development, deployment, and monitoring of ML models.

 

COLLABORATION ENVIRONMENT SET-UP, VERSIONING AND EXPERIMENT TRACKING

 

PACE-ML has detailed recommendations for successful project set-up, initialization, and deployment. This includes recommendations and best practices for activities such as defining the task, scoping out requirements, determining project feasibility, and identifying model trade-offs, along with guidance on optimal team structures and recommended architectures. PACE-ML comes with integrated toolchains to enable collaboration across the various stakeholders in ML development, including notebook and environment setup. PACE-ML provides data and model versioning using DVC, MLflow, and version control systems like Git. These tools allow tracking and linkage of data, models, and code across multiple experiments and versions.
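
As an illustration of how such tracking typically looks, the following minimal MLflow sketch logs parameters, a metric, and a versioned model artifact for one run; the experiment name, dataset, and hyperparameters are illustrative placeholders, not PACE-ML defaults.

# Minimal experiment-tracking sketch with MLflow, one of the tools named above.
import mlflow
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("pace-ml-demo")          # experiment name is hypothetical
with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 5}
    model = RandomForestClassifier(**params).fit(X_train, y_train)
    mlflow.log_params(params)                  # link hyperparameters to this run
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")   # version the trained model artifact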

 

 

DATA PROCESSING, FEATURE ENGINEERING AND FEATURE STORES

 

Drawing on experience from hundreds of machine learning projects, PACE-ML has pre-built archetypes for choosing the right set of feature engineering approaches depending on the type, volume, and complexity of the data, the business scenario, and industry-specific variations. The framework also includes modules for Data Augmentation, Imputation, Data Quality Checks, Data Profiling, and Automated Feature Selection. Once feature pipelines are set up, PACE-ML provides integration with a feature store, which acts as a central repository for creating and hosting both offline and online features that can be served to models or used for model training.
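
As a flavour of what such a pipeline contains, here is a minimal scikit-learn sketch covering two of the modules listed above, Imputation and Automated Feature Selection; the data and the choice of k are illustrative, not PACE-ML defaults.

# Illustrative feature pipeline: fill missing values, then keep the top-k features.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline

X = np.array([[1.0, np.nan, 3.0],
              [4.0, 5.0, np.nan],
              [7.0, 8.0, 9.0],
              [1.5, 2.5, 3.5]])
y = np.array([0, 1, 0, 1])

features = Pipeline([
    ("impute", SimpleImputer(strategy="median")),        # imputation module
    ("select", SelectKBest(score_func=f_classif, k=2)),  # automated feature selection
])
X_out = features.fit_transform(X, y)
print(X_out.shape)  # (4, 2)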

 

AutoML FOR MODEL TRAINING AND DEVELOPMENT

 

PACE-ML utilizes modules for AutoML and automated neural architecture search. These modules enable organizations to quickly establish a benchmark for their ML models. Further, the results of experiments and model runs are logged centrally for later reference.
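
PACE-ML's own AutoML modules are not shown here; the sketch below conveys the underlying idea of automatically establishing a benchmark, using scikit-learn's RandomizedSearchCV as a stand-in, with an example dataset and an arbitrary search space.

# Automated search over a small hyperparameter space to set a quick benchmark.
from sklearn.datasets import load_wine
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_wine(return_X_y=True)
search = RandomizedSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_distributions={"n_estimators": [50, 100, 200],
                         "max_depth": [2, 3],
                         "learning_rate": [0.05, 0.1, 0.2]},
    n_iter=5, cv=3, random_state=0,
)
search.fit(X, y)
print("benchmark accuracy:", round(search.best_score_, 3))
print("best params:", search.best_params_)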

 

MODEL GOVERNANCE & MODEL HUBS

 

A centralized model hub stores all relevant models for a project, making them easy to access, share, and deploy. PACE-ML comes pre-loaded with more than 100 machine learning and deep learning algorithms and models that address practical use cases; developers can use these directly, reducing time to market. PACE-ML supports model governance through model hubs coupled with modules for experiment tracking, drift detection, data quality, integration checks, and model lineage.
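
One way such a model-hub workflow can look is the MLflow Model Registry (MLflow is one of the tools named earlier); in this hedged sketch the run ID, model name, and alias are placeholders, and a registry-capable tracking backend is assumed.

# Register a logged model in a central registry and mark a version for production.
import mlflow
from mlflow.tracking import MlflowClient

result = mlflow.register_model(
    "runs:/<run_id>/model",   # placeholder: URI of a previously logged run
    "churn-classifier",       # hypothetical hub entry name
)
client = MlflowClient()
client.set_registered_model_alias("churn-classifier", "production", result.version)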

 

 

MACHINE LEARNING MODEL & SYSTEM TESTING

 

Testing an ML system involves input data validation, model quality and performance checks, model validation, explainability, infrastructure testing, pipeline integration testing, API testing, and data drift testing. Model reuse differs from software reuse, as models must be tuned to the input data and scenario.
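
As a minimal sketch of what such checks might look like in practice, the pytest tests below gate on input-data validity and a model-quality threshold; the dataset and the 0.9 threshold are illustrative assumptions, not PACE-ML requirements.

# Two representative ML test cases: data validation and a model-quality gate.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def test_input_data_is_valid():
    X, y = load_iris(return_X_y=True, as_frame=True)
    assert not X.isnull().any().any()   # no missing values in any column
    assert (X >= 0).all().all()         # measurements must be non-negative

def test_model_meets_quality_gate():
    X, y = load_iris(return_X_y=True)
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
    assert scores.mean() > 0.9          # minimum acceptable cross-validated accuracy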

 

 

RESPONSIBLE AI

 

PACE-ML fully integrates the Mphasis Responsible AI framework. The Responsible AI components are generic and modular, enhancing scalability and repeatability across use cases. For example, the global and local explanation module surfaces the model's internal logic and limitations; the bias identification and mitigation module helps assure model fairness; the PII redaction module preserves privacy; and so on. The ability to log experiments and model versions allows for explanation-accuracy trade-off analysis and auditability.
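
The Responsible AI modules themselves are proprietary; as a flavour of what a bias check computes, this sketch measures the demographic parity difference, the gap in positive-prediction rates between two groups, on synthetic data.

# Demographic parity check on synthetic predictions and a synthetic protected attribute.
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)                       # synthetic group labels
pred = rng.binomial(1, np.where(group == 1, 0.55, 0.45))    # synthetic model outputs

rate_a = pred[group == 0].mean()
rate_b = pred[group == 1].mean()
print("demographic parity difference:", abs(rate_a - rate_b))
# A large gap would flag the model for the mitigation module.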

 

DEPLOYMENT AUTOMATION & WORKFLOW ORCHESTRATION

 

PACE-ML supports automated model retraining and deployment through collaborative pipelines. It gives ML teams the ability to create deployment pipelines that run across several machines and can be reused by others. It supports model portability across a variety of platforms and can monitor models to determine when retraining is needed, for example in response to data drift. The pipelines that automate the workflow deploy to Kubernetes, a highly available, fault-tolerant container orchestration engine. This ensures the models' capabilities are delivered to the customer with minimal latency and allows the ecosystem to scale up dynamically during high traffic loads and scale down during low traffic. PACE-ML comes with multiple workflow orchestrators (e.g., GitHub Actions) that enable developers and teams to build pipelines once code reaches production grade. Engineers can use these pipelines to automate key processes such as dockerizing code, running unit tests, and pushing containers into dev/test/production environments.
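
An orchestrator such as GitHub Actions would normally define these stages as workflow steps; purely for illustration, the hypothetical Python driver below chains the same stages (unit tests, containerize, push), with the image name and registry as placeholders.

# Hypothetical pipeline driver: each stage fails the run on a non-zero exit code.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)   # abort the pipeline if the stage fails

run(["pytest", "tests/"])                                                # unit tests
run(["docker", "build", "-t", "registry.example.com/model-svc:latest", "."])  # dockerize
run(["docker", "push", "registry.example.com/model-svc:latest"])         # publish image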

 

PRODUCTION MONITORING, MODEL AND DATA DRIFT DETECTION

 

To ensure high performance from deployed ML systems, we need to monitor them continuously. PACE-ML offers a comprehensive production monitoring dashboard that provides the flexibility to track model metrics and explain model behavior, monitor changes in data distribution to raise a red flag when data drifts, and track operational metrics to ensure high availability of the ML system. PACE-ML works on the following guiding principles for ML system monitoring (a minimal drift-detection sketch follows the list):

  • Monitoring the performance of the running ML system
  • Identifying potential bottlenecks or runtime red flags
  • Debugging and diagnosing unexpected performance of ML system
  • Evaluating model fairness and identifying any bias in the ML system
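
As a minimal sketch of the drift detection referred to above, the following compares a feature's training distribution against live data with a two-sample Kolmogorov-Smirnov test; the 0.05 significance threshold is an assumed convention, not a PACE-ML setting.

# Statistical drift check: compare a reference window against live data.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)   # training reference window
live_feature = rng.normal(loc=0.4, scale=1.0, size=1000)    # shifted live data

stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.05:
    print(f"drift detected (KS={stat:.3f}, p={p_value:.4f})")  # raise the red flag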