AI-CODE

AI services for COntinuous trust in emerging Digital Environments
Project ID:
Funding Organization:
European Commission
Funding Programme:
HORIZON-CL4-2023-HUMAN-01-CNECT
Funding Instrument:
Innovation Action
Start Date:
01/12/2023
Duration:
36 months
Total Budget:
4,969,471 EUR
ITI Budget:
444,375 EUR
Scientific Responsible:
Dr. Symeon Papadopoulos

AI-CODE is a Horizon Europe project dedicated to empowering media professionals with cutting-edge, generative-AI-driven solutions that support the creation and dissemination of trustworthy and credible online content. The project addresses the rapid advancements in generative AI and next-generation social media—AI-based, decentralized and immersive virtual environments such as fediverses and metaverses—which pose unique challenges and opportunities for information integrity.

Generative AI has already demonstrated its potential to influence the creation and spread of information, both positively and negatively. In next-generation social media, this technology could significantly amplify disinformation, highlighting the urgent need for innovative tools and knowledge to ensure media freedom, pluralism, and the dissemination of reliable information.

The primary objective of the AI-CODE project is to build upon advanced research outcomes (tools, technologies, and expertise) from previous and ongoing EU-funded projects on disinformation, creating an innovative ecosystem of services designed to proactively support media professionals in producing trustworthy information using AI. To achieve this, AI-CODE pursues two main objectives:

Understanding the evolving landscape: To analyze and anticipate the intersection of generative AI and next-generation social media, and its implications for the (dis)information ecosystem.
Empowering media professionals: To deliver modular AI-based services that assist media professionals in navigating and leveraging emerging digital environments, detecting new forms of content manipulation, and assessing the credibility and reputation of sources.

CERTH will play a significant role in the AI-CODE project by contributing to several key work packages and tasks. As the leader of WP5, the CERTH team will oversee the development of AI-CODE services and tools, ensuring their effectiveness and alignment with the project’s goals. The team will develop and provide Trustworthy AI services for detecting synthetic media, addressing threats posed by deepfake technologies. An additional focus will be the development and enhancement of the Media Asset Annotation and Management (MAAM) service, enabling media professionals to efficiently manage and analyse multimedia content while integrating advanced functionalities, such as synthetic media detection and user trust evaluation. Leveraging its extensive expertise in multimedia forensics and trustworthy AI, the CERTH team will integrate cutting-edge detection technologies into the project, building on insights and tools developed in prior EU-funded initiatives such as vera.ai, AI4Media, and AI4TRUST.

Consortium

DS TECH SRL, Italy
Engineering – Ingegneria Informatica SPA, Italy
Centre for Research and Technology Hellas (CERTH), Greece
Fondazione Bruno Kessler, Italy
Universidad Politécnica de Madrid, Spain
Athens Technology Centre, Greece
Deutsche Welle, Germany
KInIT – Kempelen Institute of Intelligent Technologies, Slovakia
Debunk EU, Lithuania
European Institute for Participatory Media, Germany
Radboud University, Netherlands
Euractiv Media Network B.V., Netherlands
Centre for European Policy Studies, Belgium

Contact

Dr. Symeon Papadopoulos
(Scientific Responsible)

Information Technologies Institute
Centre for Research and Technology Hellas
9th km Thessaloniki - Thermi, 57001, Thessaloniki, Greece
Tel.: +30 2311 257772
Email: papadop@iti.gr