


PACE-ML is Mphasis' framework and methodology to automate multiple stages of the machine learning (ML) pipeline, accelerating the lifecycle of developing, deploying, and productionizing ML algorithms. It combines Mphasis proprietary tools and methodologies with best-in-class third-party and open-source tools. The end-to-end framework uses workflows, collaboration platforms and monitoring tools to improve efficiency and streamline the management of model selection, reproducibility, versioning, auditability, explainability, packaging, reusability, validation, deployment and monitoring.


PACE-ML is built on MLOps principles to facilitate practices and activities that enable data scientists and IT operations to collaborate on and manage production pipelines of ML applications and services. It enables organizations to improve the quality and reliability of ML solutions in production and helps automate, scale, and monitor them.




PACE-ML offers a playbook of ML learning, training and certification paths for people with varied skills, including data scientists, data architects, data engineers, solution architects, project managers and business analysts who focus on data analysis, model development, experimentation and visualization.



PACE-ML comprises detailed recommendations for project setup and for automating multiple stages of the ML pipeline. This helps in defining the task, scoping out requirements, determining project feasibility, and discussing model tradeoffs (accuracy, training time, number of features, etc.).



Our framework enables developers to leverage past knowledge, results of experiments across versions, and peer-to-peer sharing, and to branch out new variants of experiments. It also provides recommended tools and best practices for collaboration across the design process and for setting up pipelines, including the data pipeline, model training pipeline and model deployment pipeline. Automation of pipelines is a critical component of successful, repeatable and scalable ML projects.


This is the process of transforming raw data into features that represent the underlying structure of the problem to ML models. Drawing upon experience from hundreds of ML projects, PACE-ML has pre-built archetypes for choosing the right set of feature engineering approaches depending on the type, volume and complexity of the data, business scenarios and industry-domain-specific variations.
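As a minimal illustration of what feature engineering means in practice, the sketch below derives model-ready features from raw transaction records. The field names, derived features and data are hypothetical examples, not part of PACE-ML itself:

```python
# Hypothetical sketch: turning raw transaction records into features
# that expose structure (relative spend, time-of-day) to an ML model.
from datetime import datetime
from statistics import mean

raw = [
    {"amount": 120.0, "timestamp": "2023-01-05T14:30:00"},
    {"amount": 80.0,  "timestamp": "2023-01-06T09:15:00"},
    {"amount": 200.0, "timestamp": "2023-01-07T23:45:00"},
]

def engineer(records):
    avg = mean(r["amount"] for r in records)
    features = []
    for r in records:
        ts = datetime.fromisoformat(r["timestamp"])
        features.append({
            "amount": r["amount"],
            "amount_vs_avg": r["amount"] / avg,              # relative spend
            "hour": ts.hour,                                 # time-of-day signal
            "is_night": int(ts.hour >= 22 or ts.hour < 6),   # late-night flag
        })
    return features

feats = engineer(raw)
```

The same transformation logic, once captured as a function, can be reused across training and serving, which is the point of treating feature engineering as a pipeline stage.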


PACE-ML includes a module for automated model selection and training, a key enabler that speeds up ML projects.



Our framework includes validation of input data, model quality, model performance and explainability, and testing of infrastructure, pipeline integration, APIs and data drift.
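Two of the checks mentioned above, input-data validation and a model-quality gate, can be sketched as follows. The schema, metrics and thresholds are illustrative assumptions, not PACE-ML defaults:

```python
# Hypothetical sketch of automated validation checks: schema/range
# validation of input data, and a quality gate on model metrics.

def validate_inputs(rows, schema):
    """Check that every row has the expected field types and value ranges."""
    errors = []
    for i, row in enumerate(rows):
        for field, (ftype, lo, hi) in schema.items():
            value = row.get(field)
            if not isinstance(value, ftype):
                errors.append(f"row {i}: {field} has wrong type")
            elif not (lo <= value <= hi):
                errors.append(f"row {i}: {field}={value} out of range")
    return errors

def quality_gate(metrics, thresholds):
    """Pass only if every metric meets or exceeds its threshold."""
    return all(metrics[name] >= floor for name, floor in thresholds.items())

schema = {"age": (int, 0, 120), "income": (float, 0.0, 1e7)}
rows = [{"age": 34, "income": 52_000.0}, {"age": 150, "income": 48_000.0}]
errors = validate_inputs(rows, schema)  # flags the out-of-range age

ok = quality_gate({"accuracy": 0.91, "auc": 0.88},
                  {"accuracy": 0.85, "auc": 0.80})
```

Running such checks automatically at each pipeline stage is what keeps bad data or a degraded model from reaching production.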



PACE-ML enables automated model retraining and deployment through a collaborative pipeline, a key requirement for ML systems. It creates deployment pipelines that run across several machines and can be reused by others, supporting model portability across platforms and ensuring monitoring to trigger retraining under scenarios such as data drift.
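The idea of a reusable pipeline can be sketched as a sequence of named stages that run in order over a shared context; the same definition can be re-run on a retrain trigger. Stage names and logic here are illustrative, not PACE-ML internals:

```python
# Hypothetical sketch of a reusable pipeline: ordered stages share a
# context dict and log their execution, so the same definition can be
# re-run (e.g. on a drift-triggered retrain).

def run_pipeline(stages, context):
    for name, stage in stages:
        context = stage(context)
        context.setdefault("log", []).append(name)
    return context

def prepare(ctx):
    ctx["data"] = [1, 2, 3]          # stand-in for data ingestion
    return ctx

def train(ctx):
    ctx["model"] = sum(ctx["data"])  # stand-in for model training
    return ctx

def deploy(ctx):
    ctx["deployed"] = True           # stand-in for a deployment step
    return ctx

pipeline = [("prepare", prepare), ("train", train), ("deploy", deploy)]
result = run_pipeline(pipeline, {})
```

Because the pipeline is data, not hard-wired code, teams can share it, swap stages, and replay it on new data, which is what makes deployments repeatable.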



PACE-ML provides monitoring, auditing and behavior tracking (driven by changing data distributions) of models. This is important because ML model performance metrics need to be monitored regularly to check for any degradation. Changes in model behavior from accepted standards need to trigger alerts for a model refresh.
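A minimal sketch of drift-driven alerting, under assumed statistics: compare a live feature's mean against its training baseline and flag a retrain when the shift exceeds a threshold. The z-score test and threshold are illustrative choices, not PACE-ML's actual method:

```python
# Hypothetical drift check: alert when the live mean of a feature moves
# more than z_threshold standard errors away from the training baseline.
from statistics import mean, stdev

def drift_alert(baseline, live, z_threshold=3.0):
    mu, sigma = mean(baseline), stdev(baseline)
    se = sigma / (len(live) ** 0.5)   # standard error of the live mean
    z = abs(mean(live) - mu) / se
    return z > z_threshold

baseline = [10.0, 11.0, 9.0, 10.5, 9.5, 10.2, 9.8, 10.1]  # training data
stable   = [10.1, 9.9, 10.3, 9.7]   # live window, no shift
shifted  = [14.0, 15.0, 14.5, 15.5] # live window, clear upward shift
```

In production such a check would run on a schedule per monitored feature, with alerts routed to the team and, where appropriate, wired directly into the retraining pipeline.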





Improves speed and time-to-market for products and services

Reduces the effort and time in building & deploying models

Increases users' confidence in the system

Automated model pipeline management reduces manual interventions, decreases time for deployment and enables continuous delivery

Tracks model, code and data changes and increases collaboration among teams

Allows users to identify biases or defects in the system so that they can be corrected. Improves scrutability as users can tell the system when it is wrong

Monitors to ensure no broken models exist in production and responds to performance issues faster

Reduces cost of development through automation & seamless integration

Reduces risks through model explainability and compliance

Improves scalability and reliability through enhanced collaboration, monitoring and automated deployments