
Project Status: In progress
Start Date: 2025-02-01
End Date: 2026-01-31

BlockDFake

Funding Institution

Fundação para a Ciência e a Tecnologia (FCT)

Amount financed: €124,979.95

Abstract

Deepfake detection in videos and visual media content certification for the Public Administration

The proliferation of generative AI, particularly for image and video manipulation, commonly called deepfake generation, allows the creation of videos with misleading content: showing people in false contexts, even making declarations that never happened or were never recorded. The potential of this technology for misdeeds, illegal usage, or systematic population control is not new, with examples of such usage in elections and in political and social discourse (recall the recent case in Portugal where a teenager used these technologies to produce fake pornographic videos of school colleagues, also teenagers).

This technology may also be used for entertainment, marketing, and commercial purposes, with clear societal benefits. However, it presents serious risks for society when used for illicit ends. This project aims to mitigate the risks of illegal deepfake usage by creating mechanisms to detect such usage, but also by allowing content creators to apply a signature that marks their content as legitimate. The main idea is to create algorithms that combine both approaches, allowing image and video content in the Public Administration sphere to be protected. The image manipulation problem may be approached either preemptively or by detection. On the detection side, algorithms can be created to detect fake content, such as morphing and presentation attacks. These algorithms may be adapted to video, even though the literature shows that this approach alone is not enough to handle high-quality deepfakes.

Our goal is to use implicit neural representations. We expect that the mathematical properties of these formalisms will allow videos to be classified based on inconsistencies of the visual content in such representations. The main advantage of this approach is that manipulations that are not obvious in pixel space may be highlighted by converting the signal to a continuous function, since pixel-space manipulations are now sophisticated enough to consistently fool human scrutiny. On the other hand, the deepfake problem may be approached preemptively, by marking frames with robust steganography methods that allow validation by the viewer with a smartphone. In this case, the approach builds on the expertise of ISR-Coimbra in steganography, which will be adapted to video.
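To make this concrete, here is a minimal sketch of the implicit-neural-representation idea, assuming a SIREN-style coordinate network in PyTorch. This illustrates the general technique, not the project's algorithm: the architecture, omega_0, learning rate, and step count are placeholder choices, and the principled SIREN weight initialization is omitted for brevity.

```python
import torch
import torch.nn as nn


class Sine(nn.Module):
    """Sinusoidal activation used by SIREN-style networks."""

    def __init__(self, omega_0: float = 30.0):
        super().__init__()
        self.omega_0 = omega_0

    def forward(self, x):
        return torch.sin(self.omega_0 * x)


def make_inr(hidden: int = 64, depth: int = 3) -> nn.Sequential:
    # (x, y) coordinates in -> RGB out; SIREN init omitted for brevity.
    layers = [nn.Linear(2, hidden), Sine()]
    for _ in range(depth - 1):
        layers += [nn.Linear(hidden, hidden), Sine()]
    layers += [nn.Linear(hidden, 3)]
    return nn.Sequential(*layers)


def fit_residual(frame: torch.Tensor, steps: int = 500) -> torch.Tensor:
    """Fit an INR to one (H, W, 3) frame in [0, 1] and return the
    (H, W) per-pixel fit residual, the hypothetical manipulation cue."""
    h, w, _ = frame.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij"
    )
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    target = frame.reshape(-1, 3)

    model = make_inr()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for _ in range(steps):  # for large frames, sample coordinate batches
        opt.zero_grad()
        loss = ((model(coords) - target) ** 2).mean()
        loss.backward()
        opt.step()

    with torch.no_grad():
        residual = (model(coords) - target).abs().mean(dim=-1)
    return residual.reshape(h, w)
```

A detector would then aggregate residual statistics across frames; the point made above is that manipulations which blend in at the pixel level can leave a measurable footprint in how well a smooth continuous function fits the signal.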

The work plan is oriented towards algorithms for the two approaches mentioned above. Additionally, a second line of work is the definition of an architecture for a video protection and validation system or service, storing digital content signatures or deploying protection models as Software as a Service (SaaS). The goal of this architecture design is mainly to pave the way for a future framework to validate visual content in the Public Administration: on one hand, it would allow public institutions to authenticate and certify any image or video content; on the other hand, it would support compliance with the AI Act by allowing content creators to disclose whether content is real (not generated) or generated by AI systems.
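As a toy illustration of the "storing digital content signatures" idea, the sketch below registers and verifies keyed signatures for content files using only the Python standard library. It is purely hypothetical and not the project's design: a production service would likely use asymmetric signatures (so verification needs no secret) and a persistent, auditable store rather than an in-memory dict.

```python
import hashlib
import hmac

REGISTRY: dict[str, str] = {}  # content_id -> hex signature (stand-in store)


def sign_content(data: bytes, secret: bytes) -> str:
    """Keyed signature over the raw content bytes."""
    return hmac.new(secret, data, hashlib.sha256).hexdigest()


def register(content_id: str, data: bytes, secret: bytes) -> None:
    """Called by the content creator (e.g. a public institution)."""
    REGISTRY[content_id] = sign_content(data, secret)


def verify(content_id: str, data: bytes, secret: bytes) -> bool:
    """Called by the validation service on behalf of a viewer."""
    expected = REGISTRY.get(content_id)
    return expected is not None and hmac.compare_digest(
        expected, sign_content(data, secret)
    )
```

With asymmetric keys instead of the HMAC shown here, any viewer could check a disclosure without holding the signing secret, which fits the AI Act scenario described above.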

The goals to achieve at the end of the 12-month period are:
– definition of a video protection proof-of-concept algorithm through image watermarking (a minimal illustration follows this list)
– definition of a video protection proof-of-concept algorithm through the detection of frame manipulation at any point
– definition of a global system architecture for digital video watermarking and video deepfake detection, in compliance with the AI Act
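To make the watermarking goal concrete, below is a minimal spread-spectrum sketch in the spirit of the preemptive approach described above. It is not ISR-Coimbra's steganography method: the DCT band, the strength alpha, the key handling, and the correlation threshold are all illustrative assumptions, and a real system would additionally need robustness to video compression.

```python
import numpy as np
from scipy.fft import dctn, idctn


def _band_mask(shape, lo=20, hi=60):
    """Select a mid-frequency band: lo <= u + v < hi in DCT index space."""
    u, v = np.indices(shape)
    return (u + v >= lo) & (u + v < hi)


def embed(frame: np.ndarray, key: int, alpha: float = 2.0) -> np.ndarray:
    """frame: 2-D grayscale array; returns the watermarked frame."""
    coef = dctn(frame.astype(np.float64), norm="ortho")
    mask = _band_mask(coef.shape)
    rng = np.random.default_rng(key)  # keyed pseudorandom +/-1 pattern
    pattern = rng.choice([-1.0, 1.0], size=int(mask.sum()))
    coef[mask] += alpha * pattern
    return idctn(coef, norm="ortho")


def detect(frame: np.ndarray, key: int, threshold: float = 0.05) -> bool:
    """Correlate the band coefficients against the keyed pattern."""
    coef = dctn(frame.astype(np.float64), norm="ortho")
    mask = _band_mask(coef.shape)
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=int(mask.sum()))
    band = coef[mask]
    score = np.dot(band, pattern) / (np.linalg.norm(band) * np.sqrt(band.size))
    return score > threshold
```

In this toy scheme the verifier (e.g. a smartphone app) recomputes the keyed pattern and checks the correlation score per frame; an unmarked natural image correlates near zero with the pattern.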

Project record: https://sciproj.ptcris.pt/en/176747PRJ

Scientific Coordinator: Guilherme Schardong
Project Manager: Guilherme Schardong




Institute of Systems and Robotics, Department of Electrical and Computer Engineering, University of Coimbra