Taking a production-centric approach to enterprisewide AI adoption

Most enterprises that implement AI solutions have learned a bitter lesson along the way: The path to organizationwide AI adoption is far from simple, intuitive or easy.

Arguably, the hardest thing about it is the lack of clear guidance. There’s no “how-to” handbook for enterprisewide AI adoption. The absence of a simple, best-practice guide has deeply frustrated companies all over the world for the last decade, resulting in billions of dollars (in both direct investment and people hours) going down the drain.

The “AI guidebook” hasn’t been written yet because no single guidebook could exist. Those two letters, “AI,” can mean natural language processing, computer vision or time series analysis, each of which can be useful across a broad range of use cases. Combine this with the diversity of organizations that wish to deploy AI, each with its own data, business needs and pain points, and you get an immensely diverse universe of AI solutions.

So, instead of trying to come up with a universal guidebook for enterprise AI adoption, it’s probably more beneficial to define and tackle the critical elements in deploying these solutions.

The three barriers to enterprisewide AI adoption

AI’s potential business value is immense. It can be used to automate processes, streamline operations and improve product quality. In fact, the promise of AI stands apart from almost all other technology we’ve seen in the past. However, realizing this value requires overcoming three serious barriers: time to value, profitability (and costs) and scale.

Traditionally, the industry benchmark for the duration of an AI project, from initiation to production, is 12 to 18 months, and it requires employing a large team: researchers, ML engineers, software and data engineers, DevOps and QA specialists, data scientists and product/project managers. Keeping this team onboard entails a huge total cost of ownership (TCO).

The obstacles don’t end there: Once the AI application is deployed, it requires ongoing maintenance to keep the solution “on the rails” and handle the inevitable data drifts, which can easily throw off the trained model. Even once the maintenance costs are accounted for, all this investment covers a single AI application.
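The data drift described above can be caught early with a standard statistic such as the Population Stability Index (PSI). Here is a minimal pure-Python sketch; the function names, sample data and the 0.25 threshold are illustrative assumptions, not something from this article:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference and a live sample."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def frac(sample, b):
        left = lo + b * width
        right = left + width
        # make the last bin right-inclusive so the maximum value is counted
        n = sum(1 for x in sample
                if left <= x < right
                or (b == bins - 1 and left <= x <= right))
        return max(n / len(sample), 1e-6)  # avoid log(0) on empty bins

    return sum((frac(actual, b) - frac(expected, b))
               * math.log(frac(actual, b) / frac(expected, b))
               for b in range(bins))

reference = [0.1 * i for i in range(100)]      # what the model saw in training
drifted = [0.1 * i + 4.0 for i in range(100)]  # shifted production data

# Common rule of thumb: PSI above 0.25 signals drift worth retraining for.
print(psi(reference, reference) < 0.01)  # True: no drift against itself
print(psi(reference, drifted) > 0.25)    # True: strong drift detected
```

Running a check like this on every scoring batch is one of the “ongoing maintenance” costs that a one-off model project rarely budgets for.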

In an AI transformation journey, enterprises aim to launch a wide array of AI solutions across all departments and business units — not just a one-off point solution. To uncover the potential of AI and reap its benefits, the ability to scale up AI solutions is critical.

The model-centric approach is broken

The classic approach to AI implementation is model-driven: collecting a relevant dataset that can be used to train the AI model. In most cases, you can find open source code that solves a similar problem, so you don’t need to build the model from scratch. In the best-case scenario, a strong team of data scientists will spend six to eight weeks training the model to the point where it provides accurate predictions under perfect lab conditions. Yet even if your model achieves great accuracy, that’s only about 5% of the path to production.

Now you’re facing a dozen daunting tasks before your AI model can go into production in a scalable manner. You’ll have to add a security and data integration layer, handle small datasets, address performance acceleration, use smart annotation, ensure the model’s robustness and stability in the face of dynamic data conditions, and more.

This treacherous path takes approximately 1.5 years and incurs extremely high costs. And what do you get in return? A single AI solution addressing one very specific use case (one that may have shifted from mission-critical to secondary concern in that time). In other words, the orchestration of these elements is not reusable for other enterprise AI solutions and does not serve the goal of a companywide transformation.

This is the real reason why AI has so far failed to yield ROI across markets and industries. Fortune 1000 companies typically spend about $50 million annually on AI adoption. By now, it should be clear that this money should and can be used in a better way to enable enterprisewide adoption: scalable development of generalized components, as opposed to tailored point solutions that are limited to a single use case.

As models and ML code become increasingly democratized, what remains is the hard part: machine learning is, at its core, an infrastructure problem.

Dealing with the orchestration problem

In a model-centric approach, the path to production is long and local, and it does not build the infrastructure required for wide AI adoption across departments. This calls for a paradigm shift: from a model-centric approach to a production-centric one.

Instead of focusing development efforts around a specific AI model, the focus should be on developing the system, or infrastructure, that can support multiple AI models using the same framework. Instead of throwing money at the modeling problem, start investing in infrastructure and orchestration. This requires a shift in mindset more than technological prowess.
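As a rough sketch of what “one framework, many models” could look like, consider a shared registry behind which every model exposes the same interface. The `ModelRegistry` class and the stand-in models below are hypothetical, invented purely for illustration:

```python
from typing import Any, Callable, Dict

class ModelRegistry:
    """One serving framework hosting many models behind a shared interface."""

    def __init__(self) -> None:
        self._models: Dict[str, Callable[[dict], Any]] = {}

    def register(self, name: str, predict_fn: Callable[[dict], Any]) -> None:
        # Adding or replacing a model is one call; no new pipeline needed.
        self._models[name] = predict_fn

    def predict(self, name: str, features: dict) -> Any:
        # Shared concerns (security, logging, drift checks) would hook in
        # here once, instead of being rebuilt for every project.
        return self._models[name](features)

registry = ModelRegistry()
registry.register("churn", lambda f: f["tenure_months"] < 6)     # stand-in
registry.register("demand", lambda f: f["last_week_units"] * 2)  # stand-in

print(registry.predict("churn", {"tenure_months": 3}))       # True
print(registry.predict("demand", {"last_week_units": 100}))  # 200
```

The point is not the ten lines of code, but where the investment lands: the hooks inside `predict` are written once and amortized across every model the enterprise deploys.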

Besides shortening the lab-to-field path, a production-centric approach enables a constantly scaling AI implementation with evergreen solutions that yield sustainable value. In the dynamic production environment, data is constantly changing and evolving, creating new types of noise. The current model-centric approach delivers solutions that cannot remain evergreen on their own.

Think about it: As long as the production-centric approach is orchestrated carefully, your solutions are constantly monitored (avoiding the silent failures frequent in AI implementations). Version control and deployment support help you add and replace AI models without significant overhead, and continuous learning enables you to adjust the model to new data. Moreover, adding a human in the loop provides valuable feedback to continuously improve prediction outputs.
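The human-in-the-loop idea can be sketched as a confidence-based routing rule: serve predictions the model is sure about, and queue the rest for a reviewer whose labels feed the next retraining run. Every name and the 0.8 threshold below are illustrative assumptions, not this article’s design:

```python
review_queue = []        # items awaiting a human label
training_examples = []   # reviewer labels feeding the next retraining run

def predict_with_fallback(features, model, threshold=0.8):
    """Serve confident predictions; route uncertain ones to a reviewer."""
    label, confidence = model(features)
    if confidence < threshold:
        review_queue.append(features)
        return None  # caller falls back to a human or a safe default
    return label

def record_human_label(features, label):
    # Human feedback becomes fresh training data: continuous learning.
    training_examples.append((features, label))

# Stand-in model: confident only when the input score is high.
model = lambda f: ("defect", 0.95) if f["score"] > 0.5 else ("defect", 0.40)

print(predict_with_fallback({"score": 0.9}, model))  # defect
print(predict_with_fallback({"score": 0.2}, model))  # None -> queued
record_human_label({"score": 0.2}, "ok")
print(len(review_queue), len(training_examples))     # 1 1
```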

It’s not that having an AI model is no longer important — the trained model is still the starting point for any successful AI implementation. But it’s just the first step of the journey, and as it becomes driven by open source code, it’s cheaper and easier to obtain. It’s no longer what’s keeping AI solutions from taking off.

Enterprises should take a step back and see the big picture of the AI journey, and start thinking of a systematic way to utilize many AI models in a single, robust framework.