Five strikes and you're out is no longer the rule. In 2021, Twitter introduced a five-strike crisis misinformation policy to curb the spread of fake news: users who violate Twitter's misleading information policy five times have their accounts permanently suspended. Since the policy was enacted, it has led to the suspension of more than 11,000 accounts.
This move by the social media giant, echoed by many others, was a response to the stream of disinformation circulating on social media and messaging platforms. Amid global events such as the COVID-19 pandemic, climate change, the crisis of Western democracies, and the war in Ukraine, finding accurate and trusted sources of information has become more important than ever.
However, since Elon Musk's takeover of Twitter, the company's handling of disinformation on the platform has changed. In November 2022, the company announced that it was ending its policy against COVID-19 misinformation, so a surge in false information is likely, with potentially dangerous consequences in moments of crisis and heightened social emotion. The question remains: what is the right way to deal with it? Answering it requires understanding the phenomenon through different lenses, for instance social, technical, and legal perspectives.
This is the aim of a new project, REgulatory Solutions to MitigatE online DISinformation (REMEDIS), that researchers at the University of Luxembourg's Interdisciplinary Centre for Security, Reliability and Trust (SnT) are now working on. Supported by the Luxembourg National Research Fund (FNR) and the Fund for Scientific Research of Belgium (FNRS), the project brings together an international group of researchers to explore disinformation from a highly interdisciplinary perspective.
Computer scientists within SnT’s Sociotechnical Cybersecurity (IRiSC) research group, led by Prof. Gabriele Lenzini, have joined forces with social scientists from the Institute of Cognitive Science and Assessment (COSA), co-led by Prof. Vincent Koenig. They are joined by legal researchers from the Université Catholique de Louvain (UCLouvain), and media scientists from the Université Saint-Louis, Brussels.
“A promising way to understand the so-called fake news phenomenon is to look at it from different angles. For example, psychologists can explore why people seem so attracted to conspiracy theories. Computer and data scientists can look at relationships between a piece of news content and how quickly it spreads across the internet. Journalists may be interested in the relationship between news and misinformation, while lawyers can investigate how to regulate the phenomenon without infringing human rights, such as freedom of speech and expression,” explained Prof. Lenzini.
The objective of the project is to provide innovative regulatory frameworks and sociotechnical solutions to understand and mitigate the effects of disinformation. COVID-19 misinformation will be taken as a use case, as several data sets already exist, and it is easier to check facts against official biomedical data. To do so, each discipline brings a different perspective. The legal experts at UCLouvain plan to develop a new European framework to better balance the social responsibility of online platforms with the freedom of expression of users, and aim to propose a legal framework to EU regulators. Social scientists from the University of Luxembourg intend to support responsible sharing of online content by investigating the psychological effects of fake news on users. The objective of media scientists from Université Saint-Louis is to improve the journalistic practices that currently exist for fact-checking, as well as exploring the effect of disinformation on the public.
SnT’s computer scientists aim to develop technical approaches, for example analysing the origin and flow of information from a piece of news. The team, which includes doctoral researcher Yuwei Chuai, is working on the technical part of the project. They are studying how quickly and widely fake news is shared and replicated, and whether this is influenced by emotional content.
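As a rough illustration of what measuring "how quickly and widely" an item spreads could look like, the sketch below computes two simple metrics from a repost log: breadth (unique resharers) and speed (reposts per hour). The data and metric definitions here are illustrative assumptions, not the project's actual methodology.

```python
from datetime import datetime

# Hypothetical repost log for one news item: (user_id, timestamp) pairs.
# Real analyses would draw on platform APIs or research data sets.
reposts = [
    ("u1", datetime(2022, 11, 1, 9, 0)),
    ("u2", datetime(2022, 11, 1, 9, 5)),
    ("u3", datetime(2022, 11, 1, 10, 30)),
    ("u4", datetime(2022, 11, 2, 8, 0)),
]

def spread_metrics(reposts):
    """Return (breadth, speed): unique resharers and reposts per hour."""
    times = sorted(t for _, t in reposts)
    breadth = len({u for u, _ in reposts})
    span_hours = max((times[-1] - times[0]).total_seconds() / 3600, 1e-9)
    return breadth, len(reposts) / span_hours

breadth, speed = spread_metrics(reposts)
# Here: 4 unique resharers over a 23-hour window.
```

Comparing such metrics between fact-checked false items and verified news is one simple way to test whether fake news travels faster, under these assumed definitions.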
“Our approach is to reveal characteristics of the phenomenon, not to decide whether a news item is fake – a word that, per se, has no meaning. Better is to devise a system that lets readers decide for themselves. This should happen after a civil discussion that avoids inflammatory language and radicalisation, which unfortunately is what can be observed today on social media. Not that I consider all news equal, I do not, but as a supporter of critical thinking and of scientific rationality, I believe that the effect of misinformation cannot be mitigated by shouting louder or by silencing people,” explained Lenzini.
To answer these questions, SnT’s cybersecurity experts analyse official news data sets in search of specific metadata that could help them study, for example, the origin of a post, the first time it appeared, the number of reposts, the speed of its distribution, and the comments and discussion around it. They also plan to detect indicators of emotions, for example by taking keywords into account. Using all of this information, the scientists measure the distribution and information flow of the news item.
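A keyword-based emotion indicator of the kind mentioned above can be sketched as a simple lexicon lookup. The categories and word lists below are hand-picked for illustration; real studies would rely on validated emotion lexicons rather than this toy list.

```python
# Illustrative keyword lexicon, assumed for this sketch only.
EMOTION_KEYWORDS = {
    "anger": {"outrage", "furious", "scandal"},
    "fear": {"danger", "threat", "panic"},
}

def emotion_indicators(text):
    """Count distinct emotion-laden keywords per category in a post."""
    words = set(text.lower().split())
    return {emo: len(words & kws) for emo, kws in EMOTION_KEYWORDS.items()}

scores = emotion_indicators("Outrage over the latest scandal sparks panic")
# scores maps each emotion category to its keyword count for this post.
```

Correlating such per-post emotion scores with the spread metrics is one plausible way to probe whether emotional content accelerates sharing.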
The goal is to collect information that could eventually help to develop a legal framework capable of suggesting how to regulate online disinformation and its impact on individuals as well as on society.