Is the regulation of artificial intelligence a brake or a lever on innovation?

January 19, 2023

Does the standardization of artificial intelligence conflict with the principles of innovation? Such is the debate currently animating many experts on the subject. But what is really going on?

Will standardization put an end to technological innovation and cause a new 'winter' for artificial intelligence? (1)

Standardization can slow down the innovation process, but its impact is more nuanced than that. Rather than being an inhibitor, standardization has often proven to be a vector for innovation.

After defining the terms, this article will start by presenting the arguments for the need for standardization of AI technologies, and then return to the controversy of innovation versus standardization.

1. Definitions

What is innovation?

Innovation is the process of creating something new, different or better, which may include new products, new processes, new ideas or new business models. Innovation can result from improving an existing product or service or from creating a completely new product or service. It can also include improvements in processes, working methods and organizational structures. Innovation is often linked to economic growth and improved quality of life.

For example, in the transport sector, innovations such as steam trains, automobiles and airplanes have greatly reduced travel time and facilitated trade and social contacts.
In terms of artificial intelligence, innovations based on machine learning can be used to diagnose diseases such as skin cancer or Alzheimer's disease, using medical images or test results, allowing doctors to diagnose more quickly and accurately.

What is standardization?

Standardization is the process of defining and establishing standards, specifications, protocols and rules that govern the manufacture, use, performance and safety of a product, service or system. These standards may be established by standards organizations, governments or industry, and may be voluntary or mandatory. Standardization aims to ensure interoperability, quality, safety and compatibility of products and systems.

Contrary to what it might suggest, standardization can facilitate innovation, as for example in telecommunications where standards for mobile phone networks such as GSM have enabled the creation of a global market for mobile phones. This has stimulated innovation in mobile phone technologies and enabled manufacturers to produce devices that are compatible with each other. But what about standardization in artificial intelligence?

2. Why is there a need for standardization in artificial intelligence?

First of all, it is important to understand that standardization in AI is not a limiting exercise, but rather a necessity to ensure the safety of automated systems and to avoid risks to individuals. Critical sectors such as aeronautics and automotive need certifications to ensure the reliability of the systems used (2).

Thus, a predominant issue in artificial intelligence is the use of these systems in critical sectors, where safety is at the heart of operations. AI systems are often compared to black boxes whose inner workings are difficult to explain and whose reliability is hard to guarantee. Critical sectors, however, need assurance. Is it conceivable to market cars that do not pass the Euro NCAP tests, or to fly planes that are not certified by the EASA? Is it acceptable to market medicines that have not been granted a marketing authorization? Similarly, there is a real need to validate artificial intelligence, and standardizing this validation will also build trust in these systems.

There are also other elements pushing for the standardization of AI. First, there is the problem of the biases to which AI is often subject. Most of these biases come from the data used to train machine learning algorithms. Data governance issues have already been recognized, notably through the GDPR, but other issues matter as well: data quality, confidentiality and security. Data quality is a major source of bias-related problems, so the ability to address it is essential. Standardization could bring this aspect to the fore while also enabling the robustness of the algorithms to be validated.
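The data-bias issue above can be made concrete with a simple data-level check. The sketch below (plain Python, with hypothetical field names "group" and "approved") measures the demographic parity gap, i.e. the difference in positive-outcome rates between groups in a dataset; a standard could, for instance, require this gap to stay below a threshold:

```python
# Minimal sketch of a data-level bias check, assuming a binary "approved"
# outcome and a sensitive attribute "group" (both names are illustrative).

def demographic_parity_gap(records):
    """Return the gap between the highest and lowest positive-outcome rates."""
    rates = {}
    for group in {r["group"] for r in records}:
        subset = [r for r in records if r["group"] == group]
        rates[group] = sum(r["approved"] for r in subset) / len(subset)
    return max(rates.values()) - min(rates.values())

data = [
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 0},
    {"group": "B", "approved": 1},
    {"group": "B", "approved": 0},
    {"group": "B", "approved": 0},
]

# Group A is approved 2/3 of the time, group B only 1/3 of the time.
print(round(demographic_parity_gap(data), 2))  # prints 0.33
```

Real audits use richer metrics (equalized odds, calibration, etc.), but the principle is the same: bias becomes a measurable, and therefore standardizable, quantity.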

Interoperability is another issue where standardization would help. With the multiplication of AI systems, specific development environments are multiplying as well: certain neural-network layers required in one environment will not necessarily be supported in another. This makes conversions from one environment to another painful and introduces potential operating risks, especially when an algorithm's operational environment differs from its development environment. To make AI more interoperable, it is therefore necessary to agree on common standards that will facilitate the adoption and combination of algorithms.
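To illustrate the interoperability point, the toy sketch below serializes model weights to a shared, framework-neutral layout (plain JSON here; real exchange standards such as ONNX also describe the network graph, not just the weights). The function names and layout are illustrative assumptions, not an actual standard:

```python
# Toy sketch of a framework-neutral exchange format for model weights.
# Any environment that understands the documented layout can round-trip it.
import json

def export_weights(layers):
    """Serialize a list of (name, weight-matrix) layers to a neutral string."""
    return json.dumps([{"name": n, "weights": w} for n, w in layers])

def import_weights(payload):
    """Rebuild the layer list in any environment that parses the format."""
    return [(layer["name"], layer["weights"]) for layer in json.loads(payload)]

model = [("dense1", [[0.1, -0.2], [0.4, 0.3]]), ("dense2", [[0.7, 0.5]])]
blob = export_weights(model)
assert import_weights(blob) == model  # round-trips across environments
```

The value of an agreed format is precisely that the importing environment needs no knowledge of the exporting one, only of the standard itself.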

Finally, ethical and legal issues are the last two notions that will be presented to illustrate the merits of AI standardization.
These issues can be revealed with the use of AI for decision-making in hiring or obtaining financial loans for example. For ethical reasons, in order to avoid discrimination, it is important to ensure that AI makes fair and equitable decisions and therefore that there are safeguards against bias.

As for the legal aspect, it is important to address liability in the event of a dispute caused by a malfunction. If something goes wrong with an AI system, it must be possible to identify who is responsible. But who would be to blame: the AI manufacturer or the user? To protect everyone, it therefore seems important to have clear policies and procedures in place for AI, with mechanisms to enforce them and to deal with any breaches or violations.
It is in this context that standardization also makes sense, as it can help to ensure that developments are carried out in a responsible and ethical manner, to best serve the adoption of AI in society.

Thus, there are many reasons to believe that standardization is needed in artificial intelligence (3). Such alignment could simply enable AI to be deployed more operationally in our lives, allowing the technology to leave the laboratory and be used more openly. For that, however, a secure framework in which trust is the rule seems necessary, as standardization has provided in other fields such as aeronautics or automotive.

3. Understanding the controversy

Despite the real need for AI standardization, there are fears that innovation will be limited by an overly rigid framework. Many people believe that standardization will prevent companies from doing what they want with AI.

It is true that standardization can pose a risk to innovation by imposing rules that discourage thinking "outside the box" and limit possible uses. Standardization has indeed already limited the development of artificial intelligence in certain sectors, such as aeronautics: current standards do not allow the use of AI in aircraft, which means research there progresses more slowly than in areas where AI can be deployed more easily. However, the added value of standardization clearly outweighs these potential risks, especially with regard to deployment. Quite simply, what is the point of retaining total "freedom" in the development of AI if the use of these systems is ruled out in the majority of cases because it is deemed too risky?

More generally, there may also be an aversion to standardization as such: some people have always found it difficult to be bound by strict rules. In the age of agile methods, 'rigid' frameworks can be seen as a brake, even though some agile communities recognize that standardization remains necessary and is not necessarily antithetical to innovation (4).

In any case, innovation can still take place. In the worst case, it happens in the laboratory without being deployed, while waiting for standards to adapt and include it. This is exactly the situation of AI in aviation and in rail transport, which are waiting for standards to evolve before the technology can be included in these means of transport. It is also important to note that standardization should not be seen as a static process but as a continuous one: norms and standards need to be reviewed and updated regularly to remain relevant and effective in the face of technological change.

Indeed, the main fear in the absence of a standard is the marketing of AI systems whose ethics and robustness cannot be validated, or whose functioning cannot be explained.

Solutions also exist to limit the impact of standardization on innovation as much as possible (5). For example, standardization can focus on results rather than on processes. For bias, say, one could require that the algorithms in AI systems exhibit no bias, without standardizing the entire development process used to obtain an unbiased algorithm. It seems more important to focus on validation than on design as such.
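Result-oriented standardization can be pictured as an acceptance test: instead of prescribing how a model must be built, a standard would specify measurable thresholds that any candidate model must meet. The sketch below is a hedged illustration; the metric names and threshold values are invented for the example, not taken from any actual standard:

```python
# Hedged sketch of result-oriented validation: the standard constrains the
# measured outcomes, not the development process. Thresholds are illustrative.

MAX_BIAS_GAP = 0.05   # illustrative ceiling on outcome disparity between groups
MIN_ACCURACY = 0.90   # illustrative floor on predictive quality

def validate_model(metrics):
    """Check measured results against the thresholds; return a list of failures."""
    failures = []
    if metrics["bias_gap"] > MAX_BIAS_GAP:
        failures.append(f"bias gap {metrics['bias_gap']:.2f} exceeds {MAX_BIAS_GAP:.2f}")
    if metrics["accuracy"] < MIN_ACCURACY:
        failures.append(f"accuracy {metrics['accuracy']:.2f} below {MIN_ACCURACY:.2f}")
    return failures

# A model is judged on its measured results, however it was developed:
print(validate_model({"bias_gap": 0.02, "accuracy": 0.94}))  # prints []
print(validate_model({"bias_gap": 0.10, "accuracy": 0.94}))
```

The design point is that two teams using entirely different development processes can both pass, as long as their results meet the same validated thresholds.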

In any case, it should be noted that standardization, as provided for in the AI-Act, the European regulatory project, is only required for systems considered critical (6), that is, systems representing a potential risk to humans, such as medical applications, autonomous vehicles, surveillance systems or potentially discriminatory systems. Clearly, such uses must be subject to extensive controls to ensure that the algorithms are safe and reliable and do not put humans at risk. By contrast, low-risk AI systems, such as email-processing assistants, will face very few controls. Thus, artificial intelligence applications will not be affected in their entirety by standardization.

Finally, the standardization of artificial intelligence will not put an end to the desire to support and encourage innovation. Artificial intelligence is considered a technology of prime importance by many organizations that are ready to help with its development. In France, bodies such as the BPI offer support for the development of AI-based innovations (7). This is also the case at the European level, with a European Commission action plan on the development of AI (8). Support is plentiful, and it could grow even stronger in a standardized environment since, as explained above, standardization is needed to accelerate the deployment of the technology.


To conclude, the needs concerning the standardization and regulation of artificial intelligence are now clearly expressed, in particular to enable the real and wider implementation of these automated systems in society. In AI, standardization is not intended to destroy innovation, but only to establish a framework of trust that facilitates deployments. International standards are currently being developed for AI technologies, notably within ISO with the 24029 series of standards (9). Europe is also developing its own framework, somewhat along the same lines as the GDPR, called the AI-Act, which will build on these same ISO standards. Initiatives for the regulation of AI keep multiplying, and many of them are expected to come into force around 2025. For example, the EU has recently adopted two proposals concerning the liability aspects of AI, facilitating legal action in the event of a dispute (10).




Written by Baptiste Aelbrecht & Camille Jullian & Jacques Mosjilovic.


We are an innovative French software company providing tools and services to make your neural networks reliable and explainable.
