vera.ai

VERification Assisted by Artificial Intelligence
Project ID:
Funding Organization:
Funding Instrument: Research and Innovation Action
Start Date: 15/09/2022
Duration: 36 months
Total Budget: 7,046,250 EUR
ITI Budget: 778,750 EUR

Digital disinformation of various types and forms (audio, video, images, and text) poses a serious threat to the functioning of open democratic societies, public discourse, the economy, social cohesion and more. Advances in technology make the creation of manipulations and deceptions ever easier, requiring less skill and expertise. At the same time, it is becoming harder to detect what has been digitally altered or manipulated without expert knowledge and skills.

While technological advances are exploited for the production of disinformation, it is vitally important that detection technology keeps up, so that manipulations and forgeries can be identified and disinformation countered.

This is where vera.ai comes in.

The aim of the vera.ai project and its partners is to develop and build trustworthy Artificial Intelligence (AI) solutions in the fight against disinformation. These are co-created in close collaboration between leading technology experts in the domain and prospective users, all brought together in the vera.ai project consortium following a multidisciplinary co-creation approach. The goal is to deliver solutions that can be used by the widest possible community, such as journalists, investigators and researchers, while also laying the foundations for future research and development in the area of AI against disinformation. The expected solutions will deal with different content types (audio, video, images, and text) across a variety of languages, and most of them will be open, accessible and usable by anyone.

The project team is looking forward to an exciting and challenging undertaking in which all partners will do their best to support the fight against disinformation through the sensible use of AI.

CERTH is the Project Coordinator and is also responsible for Work Package 3 (WP3), “Trustworthy AI for Multimedia Content Verification”, which involves building novel AI methods that efficiently and robustly assist media professionals with multimodal content verification. The developed solutions include the analysis and enhancement of visual content, the detection of manipulated and synthetic images and videos, the extraction of verification cues from visual content, the detection of “out-of-context” content, as well as methods for retrieving near-duplicate and partially duplicate images and videos.
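To give a concrete sense of the near-duplicate retrieval task mentioned above, the following is a minimal sketch in Python using a simple average (perceptual) hash. This is a common baseline technique, not necessarily the method developed in vera.ai; the function names and the distance threshold are illustrative assumptions, and only the Pillow library is required.

from PIL import Image


def average_hash(path, hash_size=8):
    # Shrink to hash_size x hash_size, convert to grayscale, and threshold
    # each pixel at the mean brightness to obtain a compact bit signature.
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits


def hamming_distance(a, b):
    # Number of bit positions in which the two hashes differ.
    return bin(a ^ b).count("1")


def find_near_duplicates(query_path, candidate_paths, threshold=10):
    # Return (path, distance) pairs for candidates that resemble the query;
    # the threshold of 10 bits (out of 64) is an illustrative assumption.
    query_hash = average_hash(query_path)
    matches = []
    for path in candidate_paths:
        distance = hamming_distance(query_hash, average_hash(path))
        if distance <= threshold:
            matches.append((path, distance))
    return sorted(matches, key=lambda m: m[1])

In a verification workflow, such hashes would typically be precomputed for an archive of already fact-checked images, so that an incoming image can quickly be matched against known material.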

Consortium

Centre for Research and Technology Hellas, Greece
University of Sheffield, UK
University of Urbino Carlo Bo, Italy
Fraunhofer Institute for Digital Media Technology, Germany
University of Amsterdam, Netherlands
Kempelen Institute of Intelligent Technologies, Slovakia
University of Naples Federico II, Italy
École Normale Supérieure Paris-Saclay, France
Athens Technology Centre, Greece
Sirma AI (trading as Ontotext), Bulgaria
Agence France-Presse, France
Deutsche Welle, Germany
EU DisinfoLab, Belgium
European Broadcasting Union, Switzerland

Contact

Dr. Symeon Papadopoulos
(Scientific Responsible)

Information Technologies Institute
Centre for Research & Technology - Hellas
9th km Thessaloniki - Thermi, 57001, Thessaloniki, Greece
Tel.: +30 2311 257772
Email: papadop@iti.gr

Dr. Vasileios Mezaris
(Scientific Responsible)
Building A - Office 2.11

Information Technologies Institute
Centre for Research & Technology - Hellas
6th km Harilaou - Thermis, 57001, Thermi - Thessaloniki
Tel.: +30 2311 257770
Fax: +30 2310 474128
Email: bmezaris@iti.gr
