Publications

AI: the difference between learning and generalising

14 January 2021

Does an AI really "learn"? The term is another misuse of language: AI is better described in terms of computational and generalisation capabilities.

The term artificial intelligence is often abused and misused. These errors come from a tendency towards anthropomorphism, which leads us to view AI through a human prism. However, even though the notion of "learning" exists in AI, it must be qualified, because it is in fact quite far from what we usually mean by reasoning. The term belongs in the pantheon of other misunderstood AI terms such as "autonomy" or "intelligence" (see our previous articles {1}). It is worth noting that we are not talking about learning knowledge or a skill, but about learning a model. A model is nothing more than a combination of numerical values (weights) and an architecture. Under these conditions, it is difficult to equate human learning, centred on reasoning, with machine learning, based on adjusting the weights of a model.

How to define artificial intelligence?

To understand this better, we need to go back to the different notions of artificial intelligence. Today's AIs are designed in a so-called weak form, which aims to accomplish very specific tasks, so they are highly specialised. They outperform humans at these tasks because they are much faster and can handle far larger volumes of data efficiently. The dream of general-purpose AI is still a long way from being realised, and these so-called weak AIs form the operational core of AI today. An AI is still a computer program, and it remains bounded by its underlying logic. Even though model tuning is an automated process, which is called learning, it is not intelligence: it amounts to adapting a computer program, like any other, to the data presented to it. The fundamental difference between a classical program (written by a programmer) and an AI program lies in the way the designer's intention is introduced. In a classical program, the programmer's intention can be read (more or less easily) directly in the source code. In the case of an AI, the programmer's intention is found in the choice of data they decide to submit to it for its machine learning.
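
To make the contrast concrete, here is a minimal sketch using a hypothetical fever-detection example and the scikit-learn library (our own illustration, not taken from the article's sources): in the first function the designer's intention is written in the rule itself, while in the second it only appears through the labelled examples chosen for training.

```python
from sklearn.linear_model import LogisticRegression

# Classical program: the designer's intention is explicit in the source code.
def classical_is_fever(temperature_c: float) -> bool:
    return temperature_c >= 38.0  # the rule itself expresses the intention

# "AI" program: the intention only appears through the chosen training data.
temperatures = [[36.5], [37.0], [37.4], [38.1], [38.6], [39.2]]
labels       = [0,      0,      0,      1,      1,      1]      # 1 = fever

model = LogisticRegression().fit(temperatures, labels)

print(classical_is_fever(38.3))      # True, by the written rule
print(model.predict([[38.3]])[0])    # typically 1, by weights fitted to the data
```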

What is machine learning?

Machine learning is a specific approach to AI, based on the use of statistical techniques to build learning algorithms. Given a large amount of data, the learning algorithm approximates a function that reproduces the behaviour observed in that data.
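
As a minimal sketch (our own illustration, assuming NumPy): fitting a small polynomial to noisy samples of a function is exactly this kind of statistical adjustment; the "learning" consists of choosing a handful of coefficients so that the model reproduces the observed input/output pairs.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 3.0, 50)
y = np.sin(x) + rng.normal(scale=0.05, size=x.shape)   # noisy observations

# "Learning": statistically adjust four coefficients so that a cubic polynomial
# reproduces the observed input/output pairs as closely as possible.
coefficients = np.polyfit(x, y, deg=3)
model = np.poly1d(coefficients)

print("learned coefficients:", coefficients)
print("model(1.5) =", model(1.5), " true sin(1.5) =", np.sin(1.5))
```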

How does AI work?

The AI has not learned anything in the true sense of the word; it has derived a model that tends to reproduce the expected behaviour found in a dataset. The central idea is that, if the dataset is sufficiently representative, then the AI will (perhaps) be able to generalise correctly to new data. In the same vein, deep learning relies on neural networks in which the weights linking the neurons are adjusted during learning. {2}
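
The sketch below (our own toy example in pure NumPy, trained on the classic XOR problem) shows what this weight adjustment looks like in practice: a loop that repeatedly nudges the weights of a tiny network so that its output moves closer to the expected one. Nothing in it resembles reasoning.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)    # weights of the hidden layer
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)    # weights of the output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass: compute the current output of the network.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error with respect to each weight.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # "Learning" = adjusting the weights a little, nothing more.
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())   # typically close to [0, 1, 1, 0] after training
```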

In a way, AI parameterises a model through data and a learning methodology. The latter can evolve and modify the representation that has been built into the model {3}. The hope is that this model will then generalise correctly to the rest of the operational domain it will face. Indeed, AI relies on generalising from its training data to determine its results. This is the principle behind DeepMind's algorithms such as AlphaGo and AlphaStar, champions of the game of Go and of the video game StarCraft respectively. By training over a colossal number of games, the algorithm almost always manages to relate a new situation to one it already knows, in order to choose the action most likely to lead it to victory {4}. The effectiveness of these "learning", or rather "training", approaches therefore depends heavily on the data, which must be sufficiently numerous, varied and of good quality. {5}
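
Reusing the polynomial example above (again our own illustration), this dependence on representative data is easy to exhibit: the model behaves well on inputs close to its training range and fails badly on inputs far outside it.

```python
import numpy as np

rng = np.random.default_rng(1)
x_train = np.linspace(0.0, 3.0, 50)
y_train = np.sin(x_train) + rng.normal(scale=0.05, size=x_train.shape)
model = np.poly1d(np.polyfit(x_train, y_train, deg=3))

x_near, x_far = 2.8, 8.0   # inside vs. far outside the training range
print("error near the training range:", abs(model(x_near) - np.sin(x_near)))  # small
print("error far outside it        :", abs(model(x_far) - np.sin(x_far)))     # large
```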

AI does not "learn", at least not in the sense a human does. As Yann LeCun, head of AI research at Facebook, reminds us, the learning methods of a human and of an AI are quite different. Where a child needs three photos to be able to recognise an elephant in any image, an AI needs thousands. According to him, AI lacks observation and intuition, which he sees as the essence of intelligence. {6}

For Luc Julia, co-creator of Siri, AI is not intelligent because it merely follows rules. Its learning is standardised and purely computational. AI therefore cannot imitate humans in the strict sense, but it can enable them to improve their abilities in very specific areas, particularly with regard to data analysis. {7}

In conclusion

Once again, the vocabulary associated with AI is misleading. Machine learning is not human-like learning, and the "intelligence" of the algorithm is not human-like intelligence. An AI is fed a large amount of data that allows it to determine a purely mathematical model to solve a specific problem. If the AI succeeds in obtaining satisfactory results on data that are not part of its training set, it is thanks to its ability to generalise its model. However, this generalisation always has its limits: being able to handle situations sufficiently similar to those seen in training does not allow it to handle situations that are very far from them. And knowing how to determine the boundary between what is close to and what is far from known situations remains a difficult problem.
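
One naive heuristic for that boundary (a hypothetical illustration, not a method advocated here) is to flag any input whose distance to its nearest training point exceeds a threshold; real out-of-distribution detection is considerably harder than this.

```python
import numpy as np

# Hypothetical 1-D training inputs, drawn from the range the model was trained on.
training_inputs = np.random.default_rng(2).uniform(0.0, 3.0, size=(200, 1))

def looks_out_of_domain(x: float, threshold: float = 0.5) -> bool:
    # Distance to the nearest training point: a crude proxy for "far from known situations".
    nearest = float(np.min(np.abs(training_inputs - x)))
    return nearest > threshold

print(looks_out_of_domain(1.7))   # False: close to the training data
print(looks_out_of_domain(9.0))   # True: far from anything seen during training
```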

 

Find all our articles by clicking here!

Authors

 

Written by Arnault Ioualalen & Baptiste Aelbrecht


Images: public domain (Pixabay)

Numalis

We are an innovative French software company providing tools and services to make your neural networks reliable and explainable.
