Publications

Deepfakes, a new weapon for psychological operations

16 September 2021

With technological progress come new threats and new operational tactics. One new trend is the use of artificial intelligence to create fake videos known as "deepfakes".

The shape of modern warfare has changed from what history has accustomed us to. One of the most recent wars is a perfect example: the 2020 war in Nagorno-Karabakh. The confrontation between Azerbaijan and Armenia was marked by the massive use of drones, and the centrality of drones in future conflicts seems to be confirmed by it. However, other forms and terrains of confrontation could also appear in modern warfare. Psychological operations (PSYOP) may well become a new focal point in conflicts. These military operations aim to convey information to target audiences in order to influence their emotions, motivations and objective reasoning. Ultimately, the aim is to influence individuals in order to shape the behaviour of organisations or governments. These are battles in which words and images are the main weapons.

This type of operation can take various forms, such as disinformation operations (fake news), propaganda operations, or even concrete actions such as supplying food to a local population accompanied by leaflets in order to gain its trust (cf. the American action in Somalia (5)).

1. What is a deepfake?

Etymologically, the term "deepfake" is a blend of "deep learning" and "fake". It refers to a technique of image and video synthesis based on artificial intelligence, often used to impersonate an individual and have him or her deliver a speech of the creator's choosing. For greater realism, the synthetic footage is combined with and superimposed on existing images and videos. To do this, designers rely on a machine learning technique called "generative adversarial networks" (GAN), in which two neural networks are trained against each other: one generates fakes while the other tries to detect them, until the fakes become convincing. In the end, combining existing videos with the footage produced in this way yields a fake video showing one or more people in the context of an event that never actually happened. The effects of such videos can be devastating, as they quickly exploit people's credulity. And since social networks multiply the dissemination of this fake news, it can have a far greater impact than any attempt to deny it.
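To make the adversarial mechanism concrete, below is a minimal sketch of GAN training in PyTorch. It is an illustration of the principle only, not a deepfake system: it works on toy one-dimensional data instead of video frames, and every name, architecture and hyperparameter is an assumption chosen for brevity.

import torch
import torch.nn as nn

latent_dim = 8  # size of the random noise fed to the generator

# Generator: turns random noise into a fake sample.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: scores how "real" a sample looks (logit output).
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    # "Real" data: a Gaussian distribution the generator must learn to imitate.
    real = torch.randn(64, 1) * 0.5 + 2.0
    fake = G(torch.randn(64, latent_dim))

    # Discriminator step: push real samples towards label 1, fakes towards 0.
    d_loss = (loss_fn(D(real), torch.ones(64, 1))
              + loss_fn(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# If training worked, generated samples cluster around the real mean (~2.0).
print(G(torch.randn(1000, latent_dim)).mean().item())

In an actual deepfake pipeline the same two-player loop operates on faces rather than numbers, which is what makes the output increasingly hard to distinguish from genuine footage.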

2. The threat of deepfakes

Experts and mere enthusiasts alike now have access to this technology at very low cost. With relatively few resources and skills, it is possible to manipulate images, videos, sound or text. The difference in means between actors shows mainly in how difficult the resulting manipulation is to detect. But the gap between professionals and amateurs is narrowing, and it is possible that soon even the most attentive viewers will be deceived when watching deepfakes. In any case, the number of deepfakes is increasing: researchers at Deeptrace found around 15,000 deepfake videos online in 2019, nearly double the just under 8,000 found a year earlier (2).

Deepfakes could be used to interfere in an election, sow political chaos or hinder military operations through PSYOP, making them a national security issue for states.

 

As we have seen, deepfakes are a threat, since even non-state actors could use artificial intelligence as a tool to sow chaos on a large scale within public opinion. The military, especially the US military, is taking the threat seriously and is working on the detection of deepfake images and videos. However, the increasing realism of deepfakes makes their detection more and more complex. Sorting the real from the fake has become a challenge, and we are witnessing a race between the sword and the armour. "Theoretically, if you give a GAN all the techniques we know to detect it, it could pass all those techniques," says David Gunning, the DARPA programme manager in charge of the project. "We don't know if there's a limit. It's not clear." (1)

Although the web giants (Facebook, Microsoft and Google) are currently working on tools to recognise deepfakes, these are not yet good enough: the best systems correctly detect an estimated 66% of deepfakes in videos they were not trained on beforehand (3). For the moment the sword has the upper hand over the armour, but for how long?
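That drop on unseen videos is essentially a generalisation problem, which the short sketch below illustrates. It is not any of these companies' detectors: the data is synthetic, and "corpus A" and "corpus B" are hypothetical stand-ins for deepfake datasets produced by different generation methods. The only point is that a classifier trained on one corpus loses accuracy on another.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_corpus(shift, n=1000):
    # Videos reduced to 16-dimensional feature vectors; each corpus gets its
    # own distribution shift, mimicking a different generation method.
    real = rng.normal(0.0 + shift, 1.0, size=(n, 16))
    fake = rng.normal(0.8 + shift, 1.2, size=(n, 16))
    X = np.vstack([real, fake])
    y = np.array([0] * n + [1] * n)  # 0 = real, 1 = deepfake
    return X, y

X_a, y_a = make_corpus(shift=0.0)  # corpus A: what the detector is trained on
X_b, y_b = make_corpus(shift=0.6)  # corpus B: unseen, from another "generator"

clf = LogisticRegression(max_iter=1000).fit(X_a, y_a)
print("Accuracy on corpus A:", accuracy_score(y_a, clf.predict(X_a)))
print("Accuracy on corpus B:", accuracy_score(y_b, clf.predict(X_b)))

On data like its training set the detector scores well; on the shifted corpus its accuracy falls towards chance, which is the behaviour reported for real detectors confronted with deepfakes from generators they have never seen.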

 

Find all our articles at: https://numalis.com/publications.php

Authors

 

Written by Arnault Ioualalen & Baptiste Aelbrecht

 

Picture credits: Felix Mittermeier and Sten Rademaker

Numalis

We are an innovative French software company providing tools and services to make your neural networks reliable and explainable.

Contact us

Follow us