Modern warfare no longer looks like the wars of the past. One of the most recent conflicts is a perfect example: the 2020 war in Nagorno-Karabakh. The confrontation between Azerbaijan and Armenia was marked by the massive use of drones, and this confrontation seems to confirm that drones will be central to future conflicts.
However, other forms and terrains of confrontation could also emerge in modern warfare. Psychological operations (PSYOP) may well become a new focal point in conflicts. These military operations aim to transmit information to audiences in order to influence their emotions, motivations and objective reasoning.
Ultimately, the aim is to influence individuals in order to affect the behaviour of organisations or governments. These are battles where words and images are the main weapons. This type of operation can take various forms, such as disinformation operations (fake news), propaganda operations, or even concrete actions such as supplying food to a local population along with leaflets in order to gain its trust (cf. the American action in Somalia)1.
What is a deepfake?
With technological progress come new threats and new operational tactics.
One new trend is the use of artificial intelligence to create fake videos known as “deepfakes”. Etymologically, the term “deepfake” is a cross between the words “deep learning” and “fake”. It is a technique for synthesising images and video using artificial intelligence. Deepfakes are often used to impersonate an individual and make him or her deliver a speech of the creator’s choosing.
For greater realism, these videos are combined with and superimposed on existing images and videos. To do this, the designers rely on a machine learning technique using “generative adversarial networks” (GAN). In the end, the combination of existing footage and the synthesised footage makes it possible to obtain a fake video showing one or more people in the context of an event that never actually happened.
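To make the GAN idea concrete, here is a deliberately minimal, illustrative sketch (not a face-synthesis model): a generator learns to turn random noise into samples that a discriminator can no longer distinguish from real data. The “real data” here is just a one-dimensional Gaussian, and all names and hyperparameters are our own assumptions.

```python
# Minimal GAN sketch: generator vs. discriminator on 1-D toy data.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Tiny linear "networks": one weight and one bias each.
g_w, g_b = rng.normal(), 0.0   # generator: noise -> sample
d_w, d_b = rng.normal(), 0.0   # discriminator: sample -> P(real)
lr = 0.05

for step in range(2000):
    # Real samples from N(4, 1); fake samples from the generator.
    real = rng.normal(4.0, 1.0, size=32)
    noise = rng.normal(0.0, 1.0, size=32)
    fake = g_w * noise + g_b

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0
    # (gradients of the binary cross-entropy loss).
    d_real = sigmoid(d_w * real + d_b)
    d_fake = sigmoid(d_w * fake + d_b)
    grad_w = np.mean((d_real - 1.0) * real) + np.mean(d_fake * fake)
    grad_b = np.mean(d_real - 1.0) + np.mean(d_fake)
    d_w -= lr * grad_w
    d_b -= lr * grad_b

    # Generator update: push D(fake) toward 1, i.e. fool the discriminator.
    d_fake = sigmoid(d_w * fake + d_b)
    g_grad = (d_fake - 1.0) * d_w          # gradient w.r.t. the fake sample
    g_w -= lr * np.mean(g_grad * noise)    # chain rule through fake = g_w*z + g_b
    g_b -= lr * np.mean(g_grad)

# After training, the generator's output mean (g_b) has drifted toward
# the real data's mean: the fakes have become statistically plausible.
```

Real deepfake systems apply this same adversarial tug-of-war to high-dimensional images rather than scalars, with deep convolutional networks on both sides, but the training dynamic is the one sketched above.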
The effects of such videos can be devastating, as they can quickly exploit people’s credulity. Because social networks amplify the spread of this “fake news”, it can have a much greater impact than any subsequent attempt to deny it.
The deepfake threat
Experts and mere enthusiasts alike can now access this technology at very low cost. With relatively few resources and skills, it is possible to manipulate images, videos, sounds or texts. The difference in means between actors shows in how difficult their manipulations are to detect. But the gap between professionals and amateurs is narrowing, and it is possible that soon even the most attentive viewers will be deceived when watching deepfakes.
In any case, the number of deepfakes is increasing. According to researchers at Deeptrace, around 15,000 deepfake videos were found online in 2019, compared to just under 8,000 a year earlier. Deepfakes could be used to interfere in an election, sow political chaos or hinder military operations through PSYOPS, making them a national security issue for states2.
As we have seen, deepfakes are a threat, since even non-state actors could use artificial intelligence as a tool to sow chaos on a large scale within public opinion. The military, especially the US military, is taking the threat seriously and is working on the detection of deepfake images and videos.
However, the increasing realism of deepfakes makes their detection more and more complex. Sorting out the real from the fake has become a challenge, so we are witnessing a race between the sword and the shield. “Theoretically, if you give a GAN all the techniques we know to detect it, it could pass all those techniques,” says David Gunning, the DARPA programme manager in charge of the project. “We don’t know if there’s a limit. It’s not clear”3.
Although the web giants (Facebook, Microsoft and Google) are currently working on developing tools to recognise a deepfake, these are not yet good enough. The correct detection rate of the best systems is estimated at 66% on videos they had not been trained on beforehand4. For the moment, the sword has the upper hand over the shield, but for how long?
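A “correct detection rate” like the 66% figure above is simply the fraction of held-out videos (videos the system never saw during training) that the detector classifies correctly. A minimal sketch, with made-up labels and predictions:

```python
# Hypothetical sketch of how a detection rate is computed; the labels
# and predictions below are invented for illustration.
def detection_rate(labels, predictions):
    """Fraction of videos the detector classifies correctly."""
    correct = sum(l == p for l, p in zip(labels, predictions))
    return correct / len(labels)

# 1 = deepfake, 0 = genuine (toy held-out set of six videos)
labels      = [1, 1, 1, 0, 0, 0]
predictions = [1, 1, 0, 0, 1, 0]
rate = detection_rate(labels, predictions)  # 4 of 6 verdicts are correct
```

Note that this headline accuracy hides the split between missed deepfakes and genuine videos wrongly flagged, which matter very differently in practice.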
- L’intervention en Somalie 1992-1993; The US military is funding an effort to catch deepfakes and other AI trickery | MIT Technology Review ↩︎
- Qu’est-ce qu’un deepfake et quels en sont les risques ? | Oracle France ↩︎
- The US military is funding an effort to catch deepfakes and other AI trickery | MIT Technology Review ↩︎
- La détection des deepfakes grâce aux algorithmes n’est pas encore au niveau – Numerama ↩︎