
AI and its industrialization?

11 March 2021

The industrialization of artificial intelligence within a standardized framework remains a challenge for large companies

The difficulty large organizations face in adopting AI systems is today a major issue across every sector. This classic problem stems from the highly rigid structures such organizations adopt to optimize their productivity. Described by Mintzberg, this so-called mechanistic form [1] tends to slow the adoption of risky technologies, both at the level of the production tools and at the management level. Risk control therefore becomes key to bringing tools first evaluated in R&D or experimental structures into production. The development of artificial intelligence solutions raises the question of their industrialization, since their design processes still vary from one developer to another.

1- Is AI ready to be industrialized?

Today, AI is mostly a product designed to address a precise, customized task within a specific context. Open-source design aids do exist (templates, building blocks), but they are not necessarily mature enough to be inserted into industrial design and validation processes. In the absence of a standardized framework, it is difficult to offer tools that promote industrialized design: each actor has its own processes, to which generic tools will not necessarily be suited. Establishing common practices is therefore a decisive step toward the industrialization of AI. Standardized processes and tools applied during the design of an AI system can guarantee a certain level of quality in the final product, and can also save significant time during its development [2].

Today, any IT tool embedded in a system must generally meet two requirements, which can be put simply: it must do what it is supposed to do correctly, and it must do it for the right reasons. Two themes follow from these requirements: on the one hand, the system's level of performance (often called robustness) must be assured; on the other hand, the system must be sufficiently explainable for the way it operates to be controlled.

The "black box" nature of machine learning algorithms therefore hinders their industrialization because of their lack of explainability. The fact that it is not easy, at first glance, to explain how these systems work makes their adoption more complex. How can we be sure that an algorithm is fit to be industrialized if we cannot understand how it works?
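
Approaches do exist to probe such systems from the outside. As an illustration only, the sketch below uses scikit-learn's permutation importance (our choice of example here, not a method prescribed by any standard) to estimate which inputs a black-box model actually relies on:

```python
# A minimal sketch, assuming scikit-learn and a toy dataset (both our
# assumptions): permutation importance shuffles each input feature in turn
# and measures how much the model's score drops, giving a first,
# model-agnostic view of what a black-box model relies on.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An opaque model: its internal decision logic is hard to read directly.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Features whose shuffling hurts accuracy most are those the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {i}: mean importance {result.importances_mean[i]:.3f}")
```

A probe like this does not open the black box; it only observes its behavior, which is precisely why standardized explainability requirements remain an open question.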

2- The need for an AI conceptualization process

In practice, one of the problems is that there is not yet an AI design process derived from a shared reference framework. Some steps of the process are nevertheless mature enough for generic tools and methods to emerge. Development environments such as TensorFlow, Keras and PyTorch, for example, allow AI engineers to train models, as the sketch below illustrates. However, these environments do not (yet?) standardize AI development processes, and each user chooses how to use them. These platforms can also raise sovereignty issues, since their governance and design are mostly managed by large American digital companies. Even though projects such as those mentioned above aim to provide a common framework for part of the design process, none of them is fully satisfactory or has reached consensus. The standardization effort should be extended to the entire design process. This would be a step toward industrialization, by enabling shared validation processes to be put in place.
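
To make this concrete, here is a minimal PyTorch training sketch (the toy data and model are our own illustration): the framework supplies standard building blocks, while every surrounding process decision is left to the individual team.

```python
# A minimal PyTorch training sketch with toy data (our illustration, not a
# prescribed process). The framework standardizes the building blocks below,
# but data sourcing, splits, stopping criteria, validation and traceability
# are all left to each user's own conventions.
import torch
from torch import nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

X = torch.randn(256, 10)  # placeholder data: how real data is sourced,
y = torch.randn(256, 1)   # split and versioned is a team-specific choice

for epoch in range(20):   # stopping criterion: arbitrary, not standardized
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

# No mandated validation, audit trail or acceptance threshold follows:
# each of these steps is defined (or omitted) by the individual developer.
print(f"final training loss: {loss.item():.4f}")
```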

3- The difficulty for companies to implement their AI systems

Currently, these issues make it difficult for companies to move from designing and developing an AI system as a PoC (Proof of Concept) to putting the project into production. Within large entities, economic performance and a guaranteed return on investment are the priorities when launching new projects (cost-driven projects). This does not encourage risk-taking, especially with emerging technologies such as AI [3]. Looking more closely, few PoCs today reach the production phase [5]. This can be explained by the fact that PoCs do not necessarily deliver the expected performance, especially in terms of robustness once the system is placed in real conditions. This gap between performance in the lab and performance in production can lead to the abandonment of the industrialization project, and thus to the outright loss of the investments made [4].
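
That gap can be shown with a deliberately simple sketch (a toy dataset, with Gaussian noise standing in for real operating conditions; both are our assumptions): the same model is scored on the clean data it was built for and on perturbed inputs.

```python
# A minimal sketch of the PoC-to-production gap (toy dataset; Gaussian noise
# stands in for real operating conditions; both are our assumptions).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# The PoC is evaluated on clean data; deployment brings perturbed inputs.
rng = np.random.default_rng(0)
clean = model.score(X_test, y_test)
noisy = model.score(X_test + rng.normal(0.0, 1.0, size=X_test.shape), y_test)
print(f"clean accuracy: {clean:.2f}, perturbed accuracy: {noisy:.2f}")
```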

A standardization framework for AI system development is essential, and it can be established through standards that provide recommendations on the subject. Today, the lack of harmonized validation processes is blocking the industrialization of AI. Yet these processes serve to demonstrate that an AI system works properly and to assess its level of reliability, and in doing so they help build confidence in the technology. The validation phase also makes it possible to check that performance matches what is expected in a real environment. Finally, the validation phase must enable the new AI system to meet the regulatory obligations of the sector in which it operates (automotive or aeronautics, for example). The absence of a standardized validation framework hinders the establishment of these regulatory obligations, and therefore the industrialization of trusted AI.
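
As an illustration of what such harmonized validation could look like, the sketch below encodes an explicit acceptance gate; the metric names and thresholds are hypothetical, not taken from any existing standard.

```python
# An illustrative sketch of the kind of explicit validation gate a standard
# could mandate before a model is promoted to production. The metric names
# and thresholds below are hypothetical, not drawn from any existing standard.
ACCEPTANCE_CRITERIA = {
    "accuracy":           lambda v: v >= 0.95,  # nominal performance
    "accuracy_perturbed": lambda v: v >= 0.90,  # robustness in real conditions
    "max_latency_ms":     lambda v: v <= 50.0,  # operational constraint
}

def validate(metrics: dict) -> bool:
    """Return True only if every acceptance criterion is met."""
    passed = True
    for name, criterion in ACCEPTANCE_CRITERIA.items():
        if not criterion(metrics[name]):
            print(f"FAIL: {name} = {metrics[name]}")
            passed = False
    return passed

# A PoC that looks fine in the lab but degrades under perturbation is rejected.
print(validate({"accuracy": 0.97,
                "accuracy_perturbed": 0.84,
                "max_latency_ms": 38.0}))
```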

 

Find all our articles on: https://numalis.com/publications.php

Authors

 

Written by Arnault Ioualalen & Baptiste Aelbrecht & Léo Limousin

 

Picture credits:

Image 1 : Charles Deluvio

Image 2 : Med Badr Chemmaoui

Image 3 : Michael Dziedzic

Image 4 : Michael Dziedzic

Numalis

We are an innovative French software company providing tools and services to make your neural networks reliable and explainable.
