Publications

AI and cybersecurity: between reliability and vulnerabilities

16 December 2020

What is the place of AI in cybersecurity? While there are definite advantages, we should not forget the flaws.

Cybersecurity is a perpetual duel between sword and shield, pitting attackers against defenders: the advantage goes to whoever finds new flaws to exploit or develops new means of defence. It is therefore a constantly evolving field, enriching its methods and integrating the latest technologies. Today, the field is reaching a new turning point with the rise of artificial intelligence, as shown by the resurgence of attacks (or at least the multiplication of their impact) on large corporations since late 2019 and early 2020 [1]. AI is used both as a means of defence and as a tool of attack; it also opens up a new attack surface of its own. It is therefore worth taking a closer look at what artificial intelligence can bring to cybersecurity, while keeping its inherent flaws in mind.

How does AI contribute to cybersecurity?

To begin with, AI has much to offer defensive cybersecurity [2]. It should be noted that current defences, based for example on firewalls or antivirus software, mainly stop only threats they already know: they rely on recognising typical "signatures". If a hacker crafts a sophisticated, customised APT [3] (Advanced Persistent Threat) attack, it will be virtually undetectable to these defences [4]. This is not necessarily the case for an AI, thanks in particular to machine learning.

One of the main advantages of artificial intelligence for cybersecurity is therefore its capacity to learn. It is possible to teach an AI how a network usually behaves; on this basis, it can identify behaviour that deviates from the norm and raise alerts. These methods are known as UBA (User Behavior Analytics). Detecting weak signals is one of AI's strengths: it can spot suspicious browsing behaviour, passwords not typed in the usual way (typing speed, errors, etc.), suspicious information flows, and so on [5]. This makes it possible, for example, to quickly detect APTs that could otherwise have serious consequences, such as massive data theft. Reacting as quickly as possible is essential to limit the impact of an attack, and AI makes a significant contribution in this respect.
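The idea behind such behavioural baselines can be sketched very simply. The toy example below (all feature names, values and the threshold are illustrative assumptions, not a real UBA product) learns a user's typical typing speed from past logins and flags a login that deviates strongly from that baseline, using a basic z-score test:

```python
# Toy User Behavior Analytics (UBA) sketch: flag a login whose typing
# speed deviates strongly from a user's learned baseline.
# Feature, values and threshold are illustrative assumptions only.
from statistics import mean, stdev

def learn_baseline(samples):
    """Learn a per-user baseline (mean, standard deviation)
    from past observed typing speeds."""
    return mean(samples), stdev(samples)

def is_anomalous(speed, baseline, threshold=3.0):
    """Flag a login whose typing speed lies more than `threshold`
    standard deviations from the baseline (simple z-score test)."""
    mu, sigma = baseline
    z = abs(speed - mu) / sigma
    return z > threshold

# Typing speeds (characters/second) from one user's past logins.
history = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 5.1, 4.7]
baseline = learn_baseline(history)

print(is_anomalous(5.0, baseline))   # usual speed -> False (not flagged)
print(is_anomalous(11.0, baseline))  # far outside baseline -> True (alert)
```

Real UBA systems combine many such signals (navigation patterns, data flows, login times) and use far richer models, but the principle is the same: learn "normal", then alert on deviation.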

AI can also be trained to recognise the signatures of certain attacks, so that it can detect them even when they are camouflaged or slightly varied. This makes it possible to counter new versions of malware more quickly, and thus to strengthen pre-existing defences [6].

How does AI contribute to the increase in cyber attacks?

Conversely, AI can also be used to attack. A well-known example is that of deep fakes: cybercriminals used AI to imitate the voice of a company's CEO and stole more than 200,000 euros from the company by deceiving its employees [7]. Attackers can use AI to make their attacks more credible, less detectable or more personalised [8]. For example, an AI could analyse the elements contained in phishing emails to determine which ones are the most clicked on, and then reuse them.

Like any computer system, AI also has flaws that can be exploited in a cyber attack. A first weakness concerns the data needed to train the AI. Whatever the intended use, a large amount of good-quality data is required to train an AI effectively; incomplete or unrepresentative data reduces its performance and can lead to errors. Moreover, as has already been shown for image-recognition algorithms, it is possible to tamper with these training sets to make the algorithm err [9: see previous article]. In this way, ill-intentioned actors can poison databases in advance so that their attacks go undetected.
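A minimal sketch of such training-data poisoning, using an invented two-dimensional dataset and a deliberately simple nearest-centroid classifier (the coordinates and class names are illustrative assumptions): slipping a few misleading points into the "malicious" training set drags its centroid away, and a real attack sample is then misclassified as benign.

```python
# Toy data-poisoning sketch: a few planted training points shift the
# "malicious" class centroid of a nearest-centroid detector, so that a
# genuine attack sample slips through. All data is illustrative.

def centroid(points):
    """Mean point of a list of 2D samples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def classify(x, benign_c, malicious_c):
    """Assign x to the class whose centroid is nearest (squared distance)."""
    d = lambda c: (x[0] - c[0]) ** 2 + (x[1] - c[1]) ** 2
    return "malicious" if d(malicious_c) < d(benign_c) else "benign"

benign    = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1), (1.1, 1.0)]
malicious = [(5.0, 5.0), (5.2, 4.9), (4.9, 5.1), (5.1, 5.0)]
attack_sample = (4.8, 5.0)

# Clean training data: the attack sample is detected.
print(classify(attack_sample, centroid(benign), centroid(malicious)))

# Poisoned training data: planted points drag the centroid far away,
# and the same attack sample is now classified as benign.
poisoned = malicious + [(-20.0, -20.0)] * 4
print(classify(attack_sample, centroid(benign), centroid(poisoned)))
```

Real poisoning attacks on deep models are subtler, but the mechanism is the same: corrupt what the model learns from, and you corrupt what it detects.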

Data volume is another limitation: it is already considerable and will keep growing with the diversity of attacks and the gigantic quantities of traffic to be analysed. Storage requirements therefore become large, and programs become increasingly slow to run and to recognise threats [5].

The "black box" character of AI also raises problems of explainability, and therefore of understanding. Discovering the flaws of a system that is difficult to explain may prove more complicated than expected. It also becomes easier to insert unwanted behaviours, because auditing the system will remain too complex in any case [4].

AI and cybersecurity: two inseparable concepts

AI is becoming a permanent fixture in cybersecurity, both in defence and in attack, and as an object of study in its own right. A better understanding of this technology will make it possible to limit its flaws and to guard against them. In every scenario, AI should be used to complement current solutions and to support security teams, in order to reinforce them, but never to replace them [8]. There is every reason to believe that AI will continue to develop over the coming years, particularly in cybersecurity. And in the eternal opposition between attack and defence, the offensive and defensive uses that AI enables will bear watching.

 


Authors

 

Written by Arnault Ioualalen & Baptiste Aelbrecht

 

Picture credits:

Image 1 : Tumisu (Pixabay)

Image 2 : Pete Linforth (Pixabay)

Image 3 : Mika Baumeister (Unsplash)

Image 4 : CoolVid-Shows (Pixabay)

Numalis

We are an innovative French software publisher providing tools and services to make your neural networks reliable and explainable.
