AI4TRUST
HUMAN AND ARTIFICIAL INTELLIGENCE AGAINST THE DANGERS OF FALSE OR MANIPULATED INFORMATION.
Fake news: a real threat.
The online spread of false or out-of-context content has long been a central topic in public debate and is an increasing source of worry for internet users.
Some compelling data about the state of online information and the growing threat of so-called fake news recently emerged from the 2024 Digital News Report — a document published annually by the Reuters Institute at the University of Oxford.
According to the report, 59% of users surveyed say they are concerned about whether the news they find online is true or false. This figure marks a three-percentage-point increase compared to the previous year. The concern is particularly strong in countries that held elections in 2024: in the United States, for example, the figure rises to 72%.
When asked about the topics where they believe they have seen false or misleading information, respondents pointed to complex and sensitive issues: 36% mentioned politics, up 7 points from 2023.
In addition to Covid-19 (30%), the economy (28%), and climate change (23%), users also felt misled regarding topics like the Israel–Hamas war (27%).
In this context, the European Union, within the broader Horizon Europe initiative, has set out to create: “…a hybrid system based on cooperation between humans and machines, using advanced solutions powered by artificial intelligence…”
According to the EU’s vision, this system will monitor numerous online social platforms almost in real time, flagging high-risk disinformation content for expert review. It will analyze multimodal (text, audio, visual) and multilingual content using new AI algorithms.
The Main Objectives of AI4TRUST
To foster more informed use of online news—regardless of whether it is shared through news websites, blogs, or social media—the European project AI4TRUST aims to achieve several key goals, which can be summarized as follows:
- Combating disinformation and misinformation online through a hybrid approach that combines advanced computational power with the intervention of human fact-checkers.
In this regard, the EU legislator has drawn a clear distinction between what qualifies as “disinformation” and “misinformation.” Specifically, it has clarified that:
- DISINFORMATION refers to: “false, out-of-context, or manipulated information, published or shared intentionally, and capable of causing harm to individuals or society.”
- MISINFORMATION refers to: “false, out-of-context, or inaccurate information, published or shared without awareness of its falsity and without intent to cause harm.”
- Real-time monitoring of multichannel content (social media and traditional media), multimodal (text, audio, images, video), and multilingual (an estimated 70% coverage of EU countries, with 7 main supported languages: English, French, German, Greek, Italian, Polish, and Spanish).
- Support for media professionals and policymakers through a platform that generates reliable, customizable reports designed to inform and limit the spread of disinformation.
But how can these objectives be achieved in practice?
The key role of artificial intelligence: supporting fact-checking activities.
Thanks to its advanced generative capabilities, artificial intelligence (AI) can significantly amplify the creation and dissemination of disinformation.
A striking example is represented by the so-called “deepfakes”: audio and video files generated by AI-based software that can process real content (such as images and voice recordings) to alter or reconstruct, with impressive realism, the features and movements of a face or body, or even replicate a specific voice. These tools are increasingly used to manipulate information and, in some cases, influence public opinion and users’ political choices.
It is important to remember, however, that artificial intelligence can also provide powerful tools to counter and contain online disinformation.
The goal of AI4TRUST is to develop, by February 2026, a system capable of monitoring various social media platforms in real time, using new AI algorithms to analyze diverse types of content—texts, images, and videos—in multiple languages: English, French, German, Greek, Italian, Polish, and Spanish.
The system will flag high-risk content for expert analysis. Fact-checkers will periodically verify the reliability of the information, updating the algorithms with their feedback.
This will allow for the creation of reliable reports, tailored to the needs of media professionals and political decision-makers. All of this will provide them with trustworthy insights to help counter the spread of disinformation.
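The flag-then-verify loop described above can be sketched in a few lines of Python. This is a minimal illustration of the general human-in-the-loop pattern, not AI4TRUST's actual code: the keyword-based scorer, the 0.8 threshold, and every class and function name here are hypothetical stand-ins.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# Hypothetical sketch of a human-in-the-loop flagging pipeline:
# an AI model scores each item, high-risk items are queued for
# fact-checkers, and their verdicts are collected as feedback
# that a real system would use to update the model.

@dataclass
class Item:
    text: str
    risk_score: float = 0.0          # filled in by the AI scorer
    verdict: Optional[str] = None    # filled in by a human fact-checker

@dataclass
class Pipeline:
    threshold: float = 0.8           # items at or above this go to expert review
    review_queue: List[Item] = field(default_factory=list)
    feedback: List[Tuple[str, str]] = field(default_factory=list)

    def score(self, item: Item) -> Item:
        # Toy stand-in for a multimodal, multilingual classifier.
        suspicious_words = {"miracle", "hoax", "shocking"}
        hits = sum(w in item.text.lower() for w in suspicious_words)
        item.risk_score = min(1.0, hits / 2)
        return item

    def triage(self, item: Item) -> None:
        # Only high-risk content is flagged for human verification.
        if self.score(item).risk_score >= self.threshold:
            self.review_queue.append(item)

    def record_verdict(self, item: Item, verdict: str) -> None:
        # Fact-checker feedback becomes training data for the next update.
        item.verdict = verdict
        self.feedback.append((item.text, verdict))

pipeline = Pipeline()
pipeline.triage(Item("Shocking miracle cure revealed"))
pipeline.triage(Item("Parliament passed the budget today"))
print(len(pipeline.review_queue))  # only the high-risk item is queued
```

The key design point, mirrored from the project description, is that the machine only narrows the field: the final true/false verdict stays with the human fact-checker, whose decisions flow back into the system.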
The project’s ultimate aim is to enhance the human response to disinformation and misinformation within the European Union, providing researchers and media professionals with cutting-edge AI-based technologies.
So, how is AI4TRUST organized?
The organizational structure of AI4TRUST: the key role of Work Packages.
The European project AI4TRUST is structured around seven Work Packages (WP), each with a specific focus in fighting disinformation.
Let’s look at them in detail:
- WP1 – Project Management
- Coordinated by the Bruno Kessler Foundation (FBK), this work package is the organizational core of the project. It handles overall activity management: planning, timeline compliance, budget, result quality, risk management, and coordination among partners.
Its goal is to ensure that AI4TRUST proceeds efficiently, in line with set objectives and EU regulations.
- WP2 – Methodological Design, Data Gathering & Pre-processing
- Also led by FBK, WP2 is responsible for defining the common methodology, gathering data (texts, images, audio, video) from various sources (including social media, news sites, and online content), and pre-processing it. All activities comply with privacy regulations, such as GDPR, to ensure ethical and legal data use.
- WP3 – AI-driven Data Analysis Methods
- This WP develops and trains advanced AI models to analyze text, audio, and visual content. Its goal is to detect patterns, signals, or anomalies that may indicate the presence of disinformation, using multimodal approaches (combining different data types).
- WP4 – Human-Centred Explainability, Interpretation & Policy
- With contributions from FBK and other partners, this package focuses on making AI models explainable: results must be understandable, verifiable, and transparent for end users (journalists, fact-checkers, policymakers). WP4 also develops ethical guidelines and public policy recommendations for the responsible use of the technologies.
- WP5 – Technical Implementation of the Platform & Security Framework
- WP5 handles the technical development of the platform, along with cybersecurity, data protection, and permission management, in order to deliver a reliable, robust, and regulation-compliant tool.
- WP6 – Piloting, Assessment & Fact Checking
- This work package tests the platform in real-world settings. In various European countries (including Greece, Poland, and Italy), the technology is trialed by professional fact-checkers who use it to verify news and content in real time. Both qualitative and quantitative feedback is collected, performance and impact are measured, and the effectiveness of the entire system is assessed through specific indicators (KPIs).
- WP7 – Communication, Dissemination & Exploitation
- Lastly, WP7 handles everything related to the project’s visibility, outreach, and external impact: the production of informative content, workshops, conferences, activities with the media and institutions. Additionally, work is carried out on the commercial and social valorization of the project results, developing strategies to ensure the platform’s sustainability and future adoption by public bodies, media, NGOs, and other stakeholders.
In short, the described structure creates a linear and integrated workflow: from design and data collection (WP2), to the creation of models (WP3), to interpretability (WP4), to the technical construction of the platform (WP5), to field testing (WP6) and dissemination (WP7), all under the constant supervision of coordination (WP1).
Conclusions: Towards the annihilation of human judgment?
After reading this article, perhaps purists of human discernment might turn up their noses.
And it is true: the growing use of artificial intelligence is unsettling, especially when it touches the way we come to know the world and the historical events that shake its foundations (isn’t that the role of information, after all?).
But if these events were deliberately distorted to appear plausible or even real, would our eyes be able to recognize the subtle line between truth and falsehood?
If we take the 2024 Digital News Report literally, the answer would be “no”; in fact, a growing number of users are worried about encountering—or having already encountered—digital content so well-fabricated that it seems real.
In this sense, as clarified earlier, artificial intelligence plays a dual role: both executioner and defender. On one hand, it is essential in the creation of disinformation and/or misinformation; on the other, the very same AI can be trained to recognize and flag it for us.
Thus, the role of human expertise remains central in “educating” AI and in defining the ethical boundaries it must not cross. Equally central is the role of human judgment in verifying the falsehood of the fake news flagged within the AI4TRUST platform.
Far from being an “artificial” system, the platform created by the Union is instead human—very human.
Perhaps the European legislator, in shaping such a central role for human judgment, has taken to heart Descartes’ famous quote:
“Good sense is, of all things among men, the most equally distributed.”
- Oracle Italy S.T.A. s.r.l. Via Giovanni Porzio n.4- Isola B2 80143, Napoli
- (+39) 02 3056 5370
Book a call back
Share this article
Got a question?
Please complete this form to send an enquiry. Your message will be sent to one member of our team.
Related posts


Majorana 1 – Microsoft’s Quantum Chip
On February 19, 2025, Microsoft unveiled the Majorana 1, a quantum chip the size of a palm, marking a historic advancement in technology.


AI: Innovation in Aviation
The sky is no longer the limit — it’s just the beginning. I. Introduction Artificial Intelligence (AI) is rapidly becoming integral to aviation,


Employment Contracts in Business Sales
What Happens to Employment Contracts When Business Changes Hands? When the assets of a business or part of a business are transferred to


Employment Law Services in Albania
Employment law is a crucial aspect of running a business in Albania. Ensuring compliance with the Albanian Labour Code (Law No. 7961/1995) and