AI autonomy: can we really talk about it?

6 November 2020

Can we really talk about AI autonomy? The notion is quite different from the picture our media paint of it.

Artificial intelligence is a complex notion that, even today, has no single agreed definition. It is often described as a tool for mimicking cognitive functions specific to humans. These definitions, however, are often fuzzy and anthropocentric, and can do the AI industry a disservice. It is therefore not surprising that everyone's mental representation of AI is so often fueled by Hollywood fiction. Defining AI is clearly an important problem, and the words chosen turn out to be crucial. This is what we will focus on in this article. When it is said that AI is autonomous, is this really the case? The answer is no.

What is autonomy in AI?

To get started, we must go back to the etymological origin of the word "autonomy". It comes from the Greek "autos" (self) and "nomos" (law, rule): that which gives itself its own rules. Something autonomous therefore does not follow rules imposed on it from outside; it builds its own set of rules. A population declaring its autonomy does not seek to keep obeying external rules, quite the contrary. This is absolutely not the case with AI.

It would be more accurate to use the word "automatic" or "automated" to describe a system that operates without human intervention [1].

Let us illustrate with the example of the "autonomous" car. It is not at all an autonomously operating system; it is in fact "heteronomous": it obeys external rules. If we want the car to take us somewhere, it has no choice but to drive us there while respecting the traffic rules. If it were autonomous, it might decide that the destination is not right, or, if the road is too congested, it might choose to drive on the sidewalk to get there faster.

The questions concerning the "autonomy" of AIs then take on a whole new meaning. Indeed, truly autonomous systems still do not exist today. Such systems would amount to "strong" AI, which is not about to see the light of day.

AI autonomy in weapons systems: where do we stand?

When it comes to AI in the military field, we find that automatic systems have been around for a long time. The case of the SCALP missile [6] is enlightening. The system receives a mission in the form of a target description and the position where the target is supposed to be located. It then automatically computes a trajectory, along with avoidance and identification maneuvers, going as far as validating that the target is correctly identified and can therefore be engaged as planned. To take a slightly more provocative example, consider an anti-tank mine that explodes automatically as soon as a sufficiently heavy vehicle runs over it. A human lays it down; the mine then operates by itself, and it is a fully lawful system [1]. We can imagine AI improving this operation so that the mine detonates only if the detected vehicle is indeed a tank and not a truck, or even only if it is an enemy tank. The system becomes a little more intelligent, but it can still be wrong. The anti-tank mine without AI, on the other hand, distinguishes neither of these cases, yet constitutes a legitimate use of an automated system. It is therefore worth asking why a system using AI would suddenly pose an adoption problem. Yet the debate is very much alive, and the fears that AI raises in the military field, rational or not, must be addressed. No one wants weapon systems that can designate their own targets and decide on their actions without any rules imposed by humans [2].
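As a toy illustration (every name, number, and label below is invented for the example; this models no real system), the difference between a plain automated trigger and an AI-assisted one can be sketched as two fixed decision rules, both written in advance by humans:

```python
# Toy sketch: automated vs. AI-assisted trigger rules.
# All thresholds and labels are hypothetical illustrations.

def automated_trigger(weight_kg: float) -> bool:
    """Classic automation: react to any vehicle above a weight threshold."""
    return weight_kg > 10_000

def classifier_assisted_trigger(weight_kg: float, predicted_label: str) -> bool:
    """AI-assisted automation: additionally require a (fallible) classifier
    to have labelled the vehicle as a tank."""
    return weight_kg > 10_000 and predicted_label == "tank"

# A heavy truck: the plain rule reacts, the AI-assisted rule does not...
print(automated_trigger(12_000))                     # True
print(classifier_assisted_trigger(12_000, "truck"))  # False

# ...but if the classifier mislabels the truck as a tank, the AI-assisted
# rule is wrong in exactly the way the article describes.
print(classifier_assisted_trigger(12_000, "tank"))   # True
```

Note that both rules remain heteronomous: the decision logic is fixed in advance by humans, and neither system sets its own rules.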

Let us remember that AI is above all a means of augmenting human capacities, for instance in the analysis of information flows. The main strengths of weak AI are, among other things, efficient data processing, the ability to act more flexibly on a specific subject, and the automation of certain relatively repetitive tasks.

Is artificial intelligence really intelligent?

To go further, we can question the word "intelligence" in the expression "artificial intelligence". Are these systems truly intelligent? We need to return to how they work and what they actually do. Current AIs are highly specialized in carrying out one assignment, and they are limited to that task. To perform it, AIs, including neural networks, learn a model that tries to generalize from the data they were trained on. AIs do not become as intelligent as humans; they simply process data by applying the models they have learnt [3]. They remain man-made systems and are therefore the product of human intention. Intention and creativity remain fundamentally human capacities, with no equivalent in the weak AIs designed today. Only the mimicry in their behavior leads us, incorrectly, to believe in their intelligence. Once again, the terms used are confusing: AIs remain automatisms [4].
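The "learn a model, then mechanically apply it" loop can be sketched in a few lines. Below is a minimal, self-contained toy (a nearest-centroid classifier over invented 2-D points; the data and labels are assumptions for illustration): a fitting step that compresses examples into a model, and an inference step that does nothing but apply that model.

```python
# Minimal sketch of "weak" AI: fit a model to examples, then apply it.
# Toy nearest-centroid classifier; all data here is made up.

def fit_centroids(samples):
    """Learning step: reduce each labelled class to its mean point."""
    sums, counts = {}, {}
    for (x, y), label in samples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {label: (sx / counts[label], sy / counts[label])
            for label, (sx, sy) in sums.items()}

def predict(centroids, point):
    """Inference step: apply the learnt model -- no intention, no creativity."""
    px, py = point
    return min(centroids,
               key=lambda lbl: (centroids[lbl][0] - px) ** 2
                               + (centroids[lbl][1] - py) ** 2)

training = [((0, 0), "A"), ((1, 1), "A"), ((8, 8), "B"), ((9, 9), "B")]
model = fit_centroids(training)
print(predict(model, (0.5, 0.2)))  # A
print(predict(model, (7.5, 8.5)))  # B
```

However sophisticated the model (a deep network replaces the centroids with millions of learnt parameters), the inference step remains this same mechanical application of a learnt rule to incoming data.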

Automated but not autonomous AI systems

Thus, one must be careful with the choice of words when talking about AI. These systems are not autonomous but automatic. Their "intelligence" is not really intelligence; it is rather a very strong specialization that allows them to exceed human capabilities in some specific tasks. Understanding what AI fundamentally is matters, so as not to fuel false debates. International authorities have understood this and have been working for several years on the definition and standardization of AI [5]. Engaging in dialogue and anticipating how to regulate strong AI may well be worthwhile, but its hypothetical emergence should not pollute the debate on the AIs that already exist and deserve our attention.


[1] Cahier de la RDN: La révolution de l’Intelligence Artificielle (IA) en autonomie

[2] RDN: autonomie et létalité en robotique militaire



[5] Commission européenne: LIVRE BLANC, Intelligence artificielle, Une approche européenne axée sur l'excellence et la confiance


Picture credits:

Images 1 & 2: PIRO4D (Pixabay)


We are an innovative French software company providing tools and services to make your neural networks reliable and explainable.
