Today, AI can help us understand voters' expectations and strengthen political strategies like never before.
In the political field, polls have long been used intensively, as they were, until recently, the only method of predicting elections at an industrial scale. They make it possible to measure citizens' general opinion on a given subject without resorting to a large-scale referendum, which would be far more expensive. However, they also have flaws, such as the lack of detailed responses and the difficulty of segmenting responses to understand the political nuances of the population represented {1}.
To refine the analysis of public opinion, polling organizations are turning to new tools such as AI algorithms. Indeed, the predictions obtained with AI can sometimes be extremely accurate.
For example, the Kcore Analytics solution correctly predicted the results of the 2020 US presidential election.
To do so, it analyzed a large number of tweets to provide a real-time estimate of voters' likely behavior. In the end, its prediction was almost identical to the final results (52.1% for Biden and 47.9% for Trump, versus 52.27% and 47.73% in reality) {2}.
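Kcore Analytics has not published its exact pipeline, but the general idea, scoring the sentiment of tweets that mention each candidate and aggregating those scores into a vote-share estimate, can be sketched as follows. The toy lexicon, the candidate keywords, and the normalization step are illustrative assumptions, not the actual Kcore method.

```python
from collections import defaultdict

# Toy sentiment lexicon; a real system would rely on a trained NLP model.
POSITIVE = {"great", "love", "support", "win"}
NEGATIVE = {"terrible", "bad", "never", "worst"}

# Hypothetical keywords used to route each tweet to a candidate.
CANDIDATES = {"biden": "Biden", "trump": "Trump"}


def tweet_sentiment(text: str) -> int:
    """Crude polarity score: +1 positive, -1 negative, 0 neutral."""
    words = set(text.lower().split())
    return int(bool(words & POSITIVE)) - int(bool(words & NEGATIVE))


def estimate_vote_share(tweets: list[str]) -> dict[str, float]:
    """Aggregate per-candidate sentiment into a two-way share estimate."""
    score = defaultdict(float)
    for text in tweets:
        polarity = tweet_sentiment(text)
        for keyword, name in CANDIDATES.items():
            if keyword in text.lower():
                score[name] += polarity
    # Shift scores so they are all positive, then normalize into percentages.
    shift = min(score.values(), default=0.0)
    adjusted = {name: s - shift + 1.0 for name, s in score.items()}
    total = sum(adjusted.values())
    return {name: round(100 * s / total, 1) for name, s in adjusted.items()}


tweets = [
    "I love what Biden is doing, great plan",
    "Trump's rally was terrible",
    "Biden will win this, no doubt",
]
print(estimate_vote_share(tweets))  # {'Biden': 80.0, 'Trump': 20.0}
```

A real system would of course filter bots, weight accounts, and correct for the fact that Twitter users are not a representative sample of the electorate.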
Beyond the accuracy it can provide, AI offers other possibilities that can help win back undecided voters and combat abstention.
For example, AI can be used to inform voters by providing personalized recommendations on the parties or candidates that match their political views. Also, by collecting data on past campaigns, parties can analyze it with machine learning to understand voters' expectations more effectively and guide their political actions accordingly. Finally, AI helps combat "fake news" by automatically detecting it on social networks and surfacing information that debunks it {3}.
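The sources do not detail how these detection tools work. One common framing, sketched below purely as an illustration, treats "fake news" detection as supervised text classification; every headline and label in the example is invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: 1 = misleading, 0 = reliable (illustration only).
headlines = [
    "Candidate X secretly funded by aliens, insiders say",
    "Miracle poll shows 100% support for candidate Y",
    "Parliament adopts new budget after lengthy debate",
    "Electoral commission publishes official turnout figures",
]
labels = [1, 1, 0, 0]

# Bag-of-words features plus a linear classifier: a common baseline for this task.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, labels)

new_headline = ["Leaked memo proves election was decided in advance"]
print(model.predict_proba(new_headline))  # columns: [P(reliable), P(misleading)]
```

In production, such a classifier would be trained on far larger, fact-checked corpora and combined with source reputation and network signals.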
However, AI is far from infallible and has been wrong on many occasions. Two cases from the 2016 American presidential election can be cited.
The first involved the AI Polly, which was fed data from various social networks as well as news sites, representing a sample of more than 200,000 voters {4}.
The second, the "Chance of winning presidency" project, gave Hillary Clinton as the winner of the 2016 presidential election. And there are many other examples of such flops {5}.
The reasons for these failures are multiple. On the one hand, it is difficult for an AI to understand a voter's reasoning, which is not necessarily based on the candidates' platforms. On the other hand, AI struggles with humor, irony, and sarcasm, which are widely used on social networks. These tools, largely based on Natural Language Processing (NLP) techniques, are therefore far from mature, even if their potential is great. In this respect, the development of Natural Language Understanding (NLU) aims to overcome the limitations of NLP.
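To make the irony problem concrete: a lexicon-based sentiment tool scores words rather than intent, so a sarcastic post built from positive words tends to be read as praise. The sketch below runs NLTK's rule-based VADER analyzer on an invented sarcastic tweet; it is only one example of the kind of tool that stumbles here.

```python
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # word-score lexicon used by the analyzer

sarcastic_tweet = ("Oh great, another brilliant tax plan. "
                   "Exactly what this country needed.")

scores = SentimentIntensityAnalyzer().polarity_scores(sarcastic_tweet)
# Words like "great" and "brilliant" tend to push the compound score positive,
# even though a human reads the message as criticism.
print(scores)
```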
On the surface, using AI in the political field seems a good way to refine the segmentation of the voter base and allow political parties to better understand voters' expectations. Nevertheless, the use of AI in this field raises real concerns about its ethical impact.
While the exploitation of public data is nothing new in itself, precisely determining voters' political opinions through the analysis of personal data represents an unprecedented and very real danger.
One of the foundations of our democratic process remains the secret ballot, physically embodied by the voting booth in our polling stations. If tomorrow AI makes it possible to identify everyone's voting intentions, is there still an ounce of anonymity left in the act of voting?
This risk was highlighted by the "Facebook-Cambridge Analytica" scandal, which traces back to 2014 and the exploitation of personal data from 87 million Facebook user accounts by the company Cambridge Analytica.
This information was collected without the consent of the social network's users, through a personality test that required participants to log into an application.
In this way, Cambridge Analytica gained access to all the personal data of each user's Facebook account, which it merged with the answers to the personality test to build a psychological profile {6}.
Machine learning was used to build these profiles from, among other things, real-time analysis of voters' behavior on social media. These profiles were then used to run a large communication campaign whose personalized messages were tailored to predictions of each voter's sensitivity to different arguments {7}.
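Cambridge Analytica's actual models were never published. Purely to illustrate the kind of pipeline described above, the sketch below fits a classifier that predicts whether a user is receptive to a given message framing from behavioral features; every feature, label, and number is invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Invented behavioral features per user: [pages_liked, posts_shared, hours_online]
X = np.array([
    [120, 4, 1.5],
    [310, 22, 4.0],
    [45, 1, 0.5],
    [500, 40, 6.0],
    [80, 3, 1.0],
    [260, 18, 3.5],
])
# Invented labels: 1 = responded to message framing A, 0 = did not.
y = np.array([0, 1, 0, 1, 0, 1])

# Fit a simple profile model, then score a new user to decide which
# personalized message to show them.
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
new_user = np.array([[290, 20, 3.8]])
print(model.predict_proba(new_user))  # columns: [P(not receptive), P(receptive)]
```

Even this toy version shows why the practice is troubling: the inferred "receptivity" is never disclosed to the voter, yet it decides which arguments they are shown.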
Moreover, if AI makes it possible to tailor recommendations to voters so that they better match their opinions, this risks creating a tunnel effect, a filter bubble in which voters are manipulated by being cut off from other sources of information. The same ambivalence applies to the fight against "fake news": AI can just as easily be used to automatically flood social networks with it.
Europe has understood the ethical dangers of using AI in this field. To protect everyone's fundamental rights, the EU has proposed a regulation on AI: the AI Act. The regulation, which will have the force of law in all EU member states, classifies AI applications into three risk categories.
AI systems related to the "administration of justice and democratic processes" thus fall into the "high-risk" category. Vendors, distributors, and users will have to prove that their systems are compliant and take corrective measures if necessary. They will also have to be able to guarantee the relevance of the data used and the transparency of the AI {8}.
In the end, AI represents both an opportunity and a danger for the democratic process. The opportunity lies in a better understanding of voters' expectations by the political class, lower abstention rates, and greater political engagement. Here, AI can effectively help analyze public opinion and identify the main issues on which to focus in order to mobilize the electorate.
However, the performance of AI on these subjects is not yet sufficient, as the many prediction failures show. Moreover, its use raises several ethical questions, especially concerning personal data. If voting intentions can be predicted before they are even expressed, what is the point of voting booths? And if AI can predict election results in advance, won't it influence voters' decisions by acting as a self-fulfilling prophecy? In that case, what is the point of voting at all?
Following this reasoning, a Japanese candidate for the municipal elections in Tama proposed to use an AI for all his political decisions {9}.
While the proposal may come as a surprise, it is nevertheless considered legitimate by the 18% of French people who believe that "AI could make better choices than elected officials, provided that the final decision is made by a human being" {10}. This leaves plenty of room for debate on the subject in our societies.
Find all our articles on: https://numalis.com/publications.php
Written by Arnault Ioualalen & Quentin Guisti
Image credits:
Banner image: Digital (Unsplash)
Image 1: Kamala Harris (Unsplash)
Image 2: 905513 (Pixabay)