How to operationalize, integrate and deploy AI models in line with business value expectations
Modern enterprises recognize that adopting a data-driven strategy is crucial to competing in an increasingly digitalized market. Data and analytics have become a high priority, rising to the board level, which sees technologies such as machine learning and artificial intelligence as an opportunity to expand business capabilities, make processes more efficient, and facilitate the spread of new business models.
Far and wide, investments in AI and data management are increasing drastically, and new data science projects are underway to build predictive and analytical models for various purposes. However, while companies plan to scale up sophisticated artificial intelligence solutions in a reasonable time, the harsh reality is that adoption of these solutions often stalls because companies generally focus more on development than on the operationalization of the models. In many non-digital-native businesses, the adoption of the data science discipline has begun with numerous self-contained, fragmented data science teams, committed by and large to developing machine learning and deep learning models.
These small teams of data scientists have sprung up across business units with the aim of building models for different business purposes. Furthermore, the wide availability of advanced technologies for developing these models has been a double-edged sword: in exploiting this abundance of tooling to create ever newer and better-performing AI solutions at scale, companies have had to deal with increasing complexity in their production processes and operations. Just think of the extensive collection of software tools available to data scientists, such as Python libraries, Jupyter notebooks, Spark MLlib, Dask, and the numerous other open-source libraries that have sprung up around newly emerging algorithms. These allow data scientists to do traditional clustering, anomaly detection, and large-scale prediction, and to go even further with facial recognition or video analysis.
Unfortunately, the approach adopted by many companies has caused decentralization and fragmentation of data science teams, with a consequent slowdown in model development and a near-total absence of collaboration between business units. As a result, CEOs and business executives alike are dissatisfied with these initiatives: companies are failing to scale AI, accumulating models that are not deployed, not used, not updated, implemented manually, and often not in line with expectations of the value AI should deliver.
Companies, therefore, need to adopt solutions that help them deliver value in a timely manner and in line with expectations. These capabilities must be designed to support and accelerate model development and to offer a faster way to put machine learning models into production, enabling enterprises to scale and govern their AI initiatives.
Surveys show that integration and risk management are the top barriers to operationalizing AI models, and hence to the success of AI initiatives.
Addressing the gap between model deployment and model governance with ModelOps
Models have long been seen as essential enterprise assets, and AI models are showing their ability to deliver very significant value. Enterprises increasingly understand that to continuously catch this value while managing risk requires ModelOps practices for the age of AI. As a result, they’re investing in ModelOps.
ModelOps is becoming a core business capability, with enterprises investing in creating more efficient processes and systems for operationalizing AI models.
According to Gartner, “Artificial intelligence (AI) model operationalization (ModelOps) is a set of capabilities that primarily focuses on the governance and the full life cycle management of all AI and decision models. This includes models based on machine learning (ML), knowledge graphs, rules, optimization, natural language techniques and agents. In contrast to MLOps (which focuses only on the operationalization of ML models) and AIOps (which is AI for IT operations), ModelOps focuses on operationalizing all AI and decision models”. In large enterprises, then, an effective ModelOps capability accelerates AI initiatives across the company. ModelOps eliminates waste, friction and excess cost, and unleashes the creativity of the business — including professional and citizen data scientists — while protecting the enterprise from potentially unbounded risks. Gartner’s report “Innovation Insight for ModelOps — 6 August 2020”, which examines the challenges organizations face when deploying, monitoring and governing AI models at scale and the need for an enterprise ModelOps strategy, points out that “Organizations face significant challenges while building and deploying AI models at-scale — resulting in poor productivity of AI personnel, delayed operationalization and limited value creation. Data and analytic leaders must address these challenges by utilizing ModelOps to become more effective” and, moreover, that “ModelOps lies at the center of any organizations’ enterprise AI strategy, it is an enabling technology that is key to converging various AI artifacts, platforms and solutions, while ensuring scalability and governance”.
In summary, ModelOps:
- is primarily focused on the governance and life cycle management of AI and decision models (including machine learning, knowledge graphs, rules, optimization, linguistic and agent-based models). Core capabilities include the management of model development environments, model repositories, champion-challenger testing, model rollout/rollback, and CI/CD (continuous integration/continuous delivery) integration
- enables the retuning, retraining or rebuilding of AI models, providing an uninterrupted flow between the development, operationalization and maintenance of models within AI-based systems
- gives business domain experts the autonomy to assess the quality of AI models in production (interpreting the outcomes and validating KPIs) and facilitates the ability to promote or demote AI models for inferencing without a full dependency on data scientists or ML engineers.
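The champion-challenger testing and promote/demote capabilities above can be illustrated with a minimal sketch. All names here (`ModelVersion`, `accuracy`, the `margin` threshold) are illustrative assumptions, not the API of any specific ModelOps product: the challenger replaces the champion only if it clearly beats it on a shared holdout set.

```python
# Minimal champion-challenger sketch: promote the challenger only when it
# beats the current champion by a meaningful margin on a shared holdout set.
from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass
class ModelVersion:
    name: str
    predict: Callable[[Sequence[float]], int]


def accuracy(model: ModelVersion, features, labels) -> float:
    """Fraction of holdout rows the model classifies correctly."""
    correct = sum(1 for x, y in zip(features, labels) if model.predict(x) == y)
    return correct / len(labels)


def choose_production_model(champion, challenger, features, labels, margin=0.02):
    """Return the model that should serve production traffic.

    The challenger must exceed the champion's holdout accuracy by at least
    `margin` to be promoted; otherwise the champion is retained, which also
    makes rollback the safe default.
    """
    champ_score = accuracy(champion, features, labels)
    chall_score = accuracy(challenger, features, labels)
    return challenger if chall_score >= champ_score + margin else champion


# Toy usage: the challenger classifies the holdout perfectly, the champion does not.
holdout_x = [[0.1], [0.9], [0.4], [0.8]]
holdout_y = [0, 1, 0, 1]
champion = ModelVersion("v1", predict=lambda x: 1 if x[0] > 0.85 else 0)
challenger = ModelVersion("v2", predict=lambda x: 1 if x[0] > 0.5 else 0)
winner = choose_production_model(champion, challenger, holdout_x, holdout_y)
print(winner.name)  # prints "v2"
```

A real platform would compare business KPIs rather than raw accuracy and record the decision for audit, but the gating logic has this shape: a measurable margin, not a data scientist's judgment call, decides promotion.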
Stu Bailey, Co-Founder and Chief AI Architect of ModelOp, says “ModelOps is a capability that focuses on getting models into 24/7 production. It’s a capability that must be owned by the CIO’s organization or the technology center of a large organization”.
Unlock the value of AI
AI value often remains locked up because operationalizing AI is an afterthought, and the effort to productionize AI/ML models across the organization is underestimated. Many teams still struggle to leverage the full potential of AI in their applications, partly due to the investment in skills, tooling, and platforms needed to support AI life cycles while meeting enterprise governance requirements. AI operations support is critical to narrowing these gaps, allowing teams to incorporate AI technologies into their applications more easily.
As Skip McCormick, Data Science Fellow at BNY Mellon, puts it, many AI capabilities are still at the stage of potential rather than production. Few organizations are simultaneously putting sufficient resources into the infrastructure they will need underneath those capabilities in a production environment.
Today, to outperform the competition, enterprises are investing in AI. But the benefits of AI can only be realized once models are properly operationalized.
Since AI tech stacks are constantly evolving, data scientists want to use the best available tools to develop and deploy models at scale, with the ModelOps infrastructure engineering automated underneath them, and enterprises are generally happy to accommodate this. As a result, the ecosystems emerging to develop, deploy and manage AI in enterprise settings have become complex.
Because the ModelOps approach brings all these players together, several emerging startups as well as established enterprise vendors offer ModelOps solutions that orchestrate these components in an end-to-end, fully automated model life cycle. The figure below shows how, by managing such a platform, an enterprise can govern and scale its AI initiatives.
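One way to picture what "governing the life cycle" means in practice is a small state machine over the stages a model passes through. The stage names and transition rules below are illustrative assumptions, not the API of ModelOp Center or any other product; the point is that a platform can reject life-cycle moves that skip a governance gate.

```python
# Hypothetical sketch of a governed model life cycle as a state machine.
# A model may only move along transitions the policy explicitly allows.
ALLOWED_TRANSITIONS = {
    "registered": {"validated"},            # model passes governance review
    "validated": {"deployed"},              # CI/CD pushes it to production
    "deployed": {"monitoring", "retired"},  # serving traffic, or withdrawn
    "monitoring": {"deployed", "retraining", "retired"},
    "retraining": {"validated"},            # a retrained model is re-reviewed
    "retired": set(),
}


class LifecycleError(RuntimeError):
    """Raised when a transition violates the life-cycle policy."""


class ManagedModel:
    def __init__(self, name: str):
        self.name = name
        self.stage = "registered"
        self.history = [self.stage]

    def advance(self, new_stage: str) -> None:
        """Move to `new_stage`, rejecting transitions the policy forbids."""
        if new_stage not in ALLOWED_TRANSITIONS[self.stage]:
            raise LifecycleError(
                f"{self.name}: cannot go from {self.stage!r} to {new_stage!r}"
            )
        self.stage = new_stage
        self.history.append(new_stage)


# Usage: a normal path through the life cycle, ending in a drift-triggered rebuild.
model = ManagedModel("churn-predictor")
model.advance("validated")
model.advance("deployed")
model.advance("monitoring")
model.advance("retraining")  # drift detected, trigger a rebuild
print(model.history)
```

Note that a freshly registered model cannot jump straight to `deployed`: the only way into production runs through validation, which is exactly the kind of guardrail a ModelOps platform enforces while keeping an auditable history of every transition.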
Powerful platforms like ModelOp Center typically integrate with development platforms, IT systems, and enterprise applications so that businesses can leverage and extend ongoing investments in AI and IT. In this way, Data Scientists can work at scale using the tools they know best.
With platforms like ModelOp Center, enterprises can:
- Accelerate time from model deployment to decision making by 50% or more;
- Uplift model revenue contribution by up to 30%;
- Reduce business risk with an AI governance workflow.
Across many industries and companies, the strategic power of AI is now well established, and this has led to a surge in model creation. But investments in the people, processes and tools for operationalizing models — i.e. ModelOps — have lagged behind. Organizations must create dedicated model operator or model engineer roles to take on day-to-day ModelOps duties.
There is a growing recognition of the function, the problems that it addresses, the opportunities it creates and the investments that need to be made to support it. Like DevOps, ITOps and SecOps before it, ModelOps looks set to grow into a core business function in its own right as global AI use matures.
References:
- ModelOp’s website
- 2021 State of ModelOps Report
- Gartner “Innovation Insights for ModelOps” Report
- The AI Engineering Journey
- ModelOps The Key to Operationalizing AI at Enterprise Scale
- Wikipedia ModelOps
- DevOps per i dati e la data science. Verso l’AI always on anche nel business
- ModelOps Is The Key To Enterprise AI
This article first appeared at towardsdatascience.com