Leandro Cruz

Leandro received his licentiate degree in Mathematics from Universidade Estadual do Norte Fluminense Darcy Ribeiro (2006) and his BSc from Universidade Cândido Mendes (2009). He received his master's degree in Mathematics, with an option in computer graphics, in 2011, and his PhD in Mathematics in 2015, both from Instituto Nacional de Matemática Pura e Aplicada (IMPA). During his PhD, he also spent one year as a visiting researcher at LIRIS (Laboratoire d'InfoRmatique en Image et Systèmes d'information) in Lyon, France. From 2015 to 2016, he was a postdoc at IMPA, conducting research on visual representations of textures and pieces of music. He joined the VIS Team at the University of Coimbra in February 2017.

Projects

Card3DFace

This project intends to create a system for printing 3D faces on cards. As for printing on polymer cards,...

TrustStamp

This project intends to develop verification tools to be applied to INCM trust stamps, to confirm au...

UniqueMark

This project aims to improve the safety of INCM's contrasting marks in precious metal artefacts (the...

TrustFaces

The TrustFaces project derives from the TrustStamp project, completed in June 2018, in partnership w...

UniQode

This project is a continuation of the TrustStamp project, broadening its scope and making it possible to respond...

Publications

An Application of a Halftone Pattern Coding in Augmented Reality

Presentation of a coding system based on a halftone pattern (with black and white pixels) that can be integrated into markers, encoding information that can be retrieved a posteriori and used to create augmented reality applications. These markers can be easily detected in a photo, and the encoded information is the basis for parameterizing various types of augmented reality applications.

  • Date: 30/11/2017
  • Featured In: SIGGRAPH Asia 2017
  • Publication Type: Conference Papers
  • Author(s): Bruno Patrão, Leandro Cruz, Nuno Gonçalves
  • DOI: 10.1145/3145690.3145705
  • Download File
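
For illustration only, the minimal Python sketch below shows one way such a black-and-white marker could be read back as bits once it has been detected and rectified; the grid size and thresholding are assumptions, not the paper's actual decoder.

```python
# Hypothetical decoder sketch: read a rectified binary marker image as a grid
# of black/white cells and recover one bit per cell from the mean intensity.
import numpy as np

def decode_halftone_grid(marker: np.ndarray, cells: int = 16) -> np.ndarray:
    """marker: 2D grayscale array of a rectified square marker.
    Returns a (cells*cells,) array of 0/1 bits, row-major."""
    h, w = marker.shape
    ch, cw = h // cells, w // cells
    bits = np.empty(cells * cells, dtype=np.uint8)
    for i in range(cells):
        for j in range(cells):
            cell = marker[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw]
            bits[i * cells + j] = 1 if cell.mean() > 127 else 0
    return bits

# Example on a random synthetic marker
marker = (np.random.rand(160, 160) > 0.5).astype(np.uint8) * 255
print(decode_halftone_grid(marker)[:16])
```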

Halftone Pattern: A New Steganographic Approach

Presentation of a steganographic technique to hide textual information in an image. It is inspired by the use of dithering to create halftone images. Starting from a base image, it creates the coded image by associating each base image pixel with a set of two-color (halftone) pixels forming an appropriate pattern. The coded image is machine readable, aesthetically pleasing and secure, and includes data redundancy and compression.

  • Date: 03/07/2018
  • Featured In: Eurographics 2018
  • Publication Type: Conference Papers
  • Author(s): Bruno Patrão, Leandro Cruz, Nuno Gonçalves
  • Download File
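
As a rough illustration of the pattern-association idea (not the published scheme), the sketch below encodes one message bit per base-image pixel by choosing between two 2x2 halftone patterns of equal brightness but different layout; all pattern tables and parameters are assumed.

```python
import numpy as np

# Two pattern variants per brightness level (1..3 white sub-pixels out of 4);
# the variant chosen encodes the message bit, the level approximates the pixel.
PATTERNS = {
    1: [np.array([[1, 0], [0, 0]]), np.array([[0, 0], [0, 1]])],
    2: [np.array([[1, 0], [0, 1]]), np.array([[0, 1], [1, 0]])],
    3: [np.array([[1, 1], [1, 0]]), np.array([[0, 1], [1, 1]])],
}

def encode(base: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """base: 2D grayscale image; bits: one bit per pixel (flattened, same size)."""
    h, w = base.shape
    out = np.zeros((2 * h, 2 * w), dtype=np.uint8)
    levels = np.clip(np.rint(base / 255 * 2 + 1).astype(int), 1, 3)  # 1..3
    for idx, (i, j) in enumerate(np.ndindex(h, w)):
        pattern = PATTERNS[levels[i, j]][int(bits[idx])]
        out[2 * i:2 * i + 2, 2 * j:2 * j + 2] = pattern * 255
    return out

base = np.full((4, 4), 128, dtype=np.uint8)
bits = np.random.randint(0, 2, base.size)
print(encode(base, bits).shape)  # (8, 8): each pixel became a 2x2 halftone cell
```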

Exemplar Based Filtering of 2.5D Meshes of Faces

Presentation of a content-aware filtering method for 2.5D meshes of faces: an exemplar-based filter that corrects each point of a given mesh through local model-exemplar neighborhood comparisons, taking advantage of prior knowledge of the models (faces) to improve the comparison.

  • Date: 25/03/2018
  • Featured In: Eurographics 2018 Posters
  • Publication Type: Poster
  • Author(s): Leandro Dihl, Leandro Cruz, Nuno Gonçalves
  • Download File
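
A minimal sketch of the exemplar-based idea, under strong simplifying assumptions (brute-force search, depth maps instead of meshes): each value is corrected by the centre of the most similar patch found in an exemplar.

```python
import numpy as np

def exemplar_filter(depth: np.ndarray, exemplar: np.ndarray, r: int = 1) -> np.ndarray:
    """Replace each depth value by the centre of the best-matching exemplar patch."""
    out = depth.copy().astype(float)
    h, w = depth.shape
    eh, ew = exemplar.shape
    for i in range(r, h - r):
        for j in range(r, w - r):
            patch = depth[i - r:i + r + 1, j - r:j + r + 1].astype(float)
            best, best_val = np.inf, depth[i, j]
            for y in range(r, eh - r):
                for x in range(r, ew - r):
                    ref = exemplar[y - r:y + r + 1, x - r:x + r + 1].astype(float)
                    d = np.sum((patch - ref) ** 2)   # local neighborhood comparison
                    if d < best:
                        best, best_val = d, exemplar[y, x]
            out[i, j] = best_val
    return out

noisy = np.random.rand(8, 8)            # stand-in for a noisy 2.5D face patch
clean_exemplar = np.random.rand(8, 8)   # stand-in for a prior face exemplar
print(exemplar_filter(noisy, clean_exemplar).shape)
```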

Use of Epipolar Images Towards Outliers Extraction in Depth Images

Method for filtering depth models reconstructed from light field cameras, based on removing low-confidence reconstructed values and replacing them with an inpainting method. This approach has shown good results for outlier removal.

  • Date: 26/10/2018
  • Featured In: RECPAD 2018 - 24th Portuguese Conference on Pattern Recognition
  • Publication Type: Poster
  • Author(s): Dirce Celorico, Leandro Cruz, Leandro Dihl, Nuno Gonçalves
  • Download File
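
A minimal sketch of this kind of pipeline, assuming an 8-bit depth map and an arbitrary per-pixel confidence score (the paper's confidence measure and inpainting choice are not reproduced here): low-confidence values are masked and filled with OpenCV's inpainting.

```python
import cv2
import numpy as np

def filter_depth(depth_u8: np.ndarray, confidence: np.ndarray,
                 thresh: float = 0.5, radius: int = 3) -> np.ndarray:
    """Mask low-confidence depth values and fill them by inpainting."""
    mask = (confidence < thresh).astype(np.uint8) * 255   # outliers to replace
    return cv2.inpaint(depth_u8, mask, radius, cv2.INPAINT_TELEA)

depth = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in depth map
conf = np.random.rand(64, 64).astype(np.float32)             # stand-in confidence
print(filter_depth(depth, conf).shape)
```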

A Content-aware Filtering for RGBD Faces

A content-aware filter for 2.5D meshes of faces that preserves their intrinsic features. We take advantage of prior knowledge of the models (faces) to improve the comparison. The model is invariant to depth translation and scale. The proposed method is evaluated on a public 3D face dataset with different levels of noise. The results show that the method removes noise without smoothing the sharp features of the face.

  • Date: 25/02/2019
  • Featured In: GRAPP 2019 - International Conference on Computer Graphics Theory and Applications
  • Publication Type: Conference Papers
  • Author(s): Leandro Dihl, Leandro Cruz, Nuno Monteiro, Nuno Gonçalves
  • Download File
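
The translation and scale invariance mentioned above can be illustrated with a small, assumed normalisation step applied to depth patches before comparison:

```python
import numpy as np

def normalise_patch(patch: np.ndarray) -> np.ndarray:
    p = patch.astype(float) - patch.mean()   # depth-translation invariance
    s = np.abs(p).max()
    return p / s if s > 0 else p             # scale invariance

a = np.array([[1.0, 1.2], [1.1, 1.3]])
b = a * 2.5 + 10.0                           # same face patch, shifted and scaled
print(np.allclose(normalise_patch(a), normalise_patch(b)))  # True
```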

Graphic Code: Creation, Detection and Recognition

Graphic Code is a new Machine Readable Coding (MRC) method. It creates coded images by arranging available primitive graphic units according to predefined patterns. Some of these patterns are previously associated with symbols used to compose the messages and to define a dictionary.

  • Date: 26/10/2018
  • Featured In: RECPAD 2018 - 24th Portuguese Conference on Pattern Recognition
  • Publication Type: Poster
  • Author(s): Leandro Cruz, Bruno Patrão, Nuno Gonçalves
  • Download File
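
A toy illustration of the dictionary idea, with made-up 3x3 "graphic units" standing in for the real primitives:

```python
import numpy as np

# Hypothetical dictionary: each symbol maps to a primitive 3x3 binary unit.
UNITS = {
    "A": np.array([[1, 0, 1], [0, 1, 0], [1, 0, 1]]),
    "B": np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]]),
    "C": np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]]),
}

def encode_message(message: str, cols: int = 4) -> np.ndarray:
    """Tile the units corresponding to the message symbols into a coded image."""
    rows = -(-len(message) // cols)                      # ceil division
    image = np.zeros((rows * 3, cols * 3), dtype=np.uint8)
    for k, symbol in enumerate(message):
        i, j = divmod(k, cols)
        image[i * 3:(i + 1) * 3, j * 3:(j + 1) * 3] = UNITS[symbol] * 255
    return image

print(encode_message("ABCABCAB").shape)  # (6, 12)
```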

Large Scale Information Marker Coding for Augmented Reality Using Graphic Code

The main advantage of using this approach as an Augmented Reality marker is the possibility of creating generic applications that can read and decode these Graphic Code markers, which may contain 3D models and complex scenes encoded in them. Additionally, the resulting marker has strong aesthetic characteristics, since it is generated from any chosen base image.

  • Date: 10/12/2018
  • Featured In: IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)
  • Publication Type: Conference Papers
  • Author(s): Leandro Cruz, Bruno Patrão, Nuno Gonçalves
  • Download File

An Augmented Reality Application Using Graphic Code Markers

Presentation of applications of Graphic Code, exploiting its large-scale information coding capabilities for Augmented Reality.

  • Date: 10/12/2018
  • Featured In: IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)
  • Publication Type: Poster
  • Author(s): Leandro Cruz, Bruno Patrão, Nuno Gonçalves
  • Download File

Uniquemark: A computer vision system for hallmarks authentication

Uniquemark is a vision system for authentication based on random marks, particularly hallmarks. Hallmarks are used worldwide to authenticate and attest the legal fineness of precious metal artefacts. Our authentication method is based on a multiclass classifier model that uses a mark descriptor composed of several geometric features of the particles.

  • Date: 26/10/2018
  • Featured In: RECPAD 2018 - 24th Portuguese Conference on Pattern Recognition
  • Publication Type: Conference Papers
  • Author(s): Ricardo Barata, Leandro Cruz, Bruno Patrão, Nuno Gonçalves
  • Download File
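
A hedged sketch of the classification stage only, using synthetic geometric features and an off-the-shelf classifier as stand-ins for the paper's descriptor and model:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))        # 6 synthetic geometric particle features per mark
y = rng.integers(0, 3, size=300)     # 3 assumed classes of marks

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("accuracy on synthetic data:", clf.score(X_te, y_te))
```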

Improving Facial Depth Data by Exemplar-based Comparisons

Presentation of a filtering method for meshes of faces that preserves their intrinsic features. It is based on exemplar-based neighborhood matching in which all models are in a frontal position, avoiding rotation and perspective drawbacks. Moreover, the model is invariant to depth translation and scale.

  • Date: 26/10/2018
  • Featured In: RECPAD 2018 - 24th Portuguese Conference on Pattern Recognition
  • Publication Type: Poster
  • Author(s): Leandro Dihl, Leandro Cruz, Nuno Gonçalves
  • Download File

Graphic Code: a New Machine Readable Approach

Graphic Code has two major advantages over classical MRCs: aesthetics and larger coding capacity. It opens new possibilities for several purposes such as identification, tracking (using a specific border), and transfer of content to the application. This paper focuses on presenting how Graphic Code can be used for industrial applications, emphasizing its uses in Augmented Reality (AR).

  • Date: 10/12/2018
  • Featured In: IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)
  • Publication Type: Conference Papers
  • Author(s): Leandro Cruz, Bruno Patrão, Nuno Gonçalves
  • DOI: 10.1109/AIVR.2018.00036
  • Download File

UniqueMark - A method to create and authenticate a unique mark in precious metal artefacts

The UniqueMark project aims at creating a system to provide a precious metal artefact with a unique, unclonable and irreproducible mark, and at building a system to validate its authenticity. The system verifies the mark's authenticity using a microscope, or a smartphone camera with an attached macro lens.

  • Date: 24/07/2019
  • Featured In: Jewellery Materials Congress 2019
  • Publication Type: Conference Papers
  • Author(s): Nuno Gonçalves, Leandro Cruz
  • Download File

Deep Facial Diagnosis: Deep Transfer Learning From Face Recognition to Facial Diagnosis

The relationship between face and disease has been discussed for thousands of years, which has led to the practice of facial diagnosis. The objective here is to explore the possibility of identifying diseases from uncontrolled 2D face images using deep learning techniques. In this paper, we propose using deep transfer learning from face recognition to perform computer-aided facial diagnosis of various diseases. In the experiments, we perform computer-aided facial diagnosis on a single disease (beta-thalassemia) and on multiple diseases (beta-thalassemia, hyperthyroidism, Down syndrome, and leprosy) with a relatively small dataset. The overall top-1 accuracy achieved by deep transfer learning from face recognition reaches over 90%, outperforming both traditional machine learning methods and clinicians in the experiments. In practice, collecting disease-specific face images is complex, expensive and time consuming, and imposes ethical limitations due to personal data treatment. Therefore, the datasets used in facial diagnosis research are private and generally small compared with those of other machine learning application areas. The success of deep transfer learning applications in facial diagnosis with a small dataset could provide a low-cost and noninvasive way for disease screening and detection.

  • Date: 16/06/2020
  • Featured In: IEEE Access, vol. 8, pp. 123649-123661
  • Publication Type: Journal Articles
  • Author(s): Bo Jin, Leandro Cruz, Nuno Gonçalves
  • DOI: 10.1109/ACCESS.2020.3005687
  • Download File
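
An illustrative transfer-learning skeleton: the paper transfers from a face recognition network, whereas this sketch reuses an ImageNet-pretrained torchvision ResNet-18 as a stand-in, freezing the backbone and retraining a small disease-classification head.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5                      # assumed: healthy + 4 diseases

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():         # freeze the pretrained feature extractor
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)   # new trainable head

optimiser = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One dummy training step on random tensors, standing in for face images/labels.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimiser.step()
print(float(loss))
```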

Biometric System for Mobile Validation of ID And Travel Documents

Current trends in the security of ID and travel documents require portable and efficient validation applications that rely on biometric recognition. Such tools allow any authority or citizen to validate documents and authenticate citizens without the need for expensive and sometimes unavailable proprietary devices. In this work, we present a novel, compact and efficient approach to validating ID and travel documents for offline mobile applications. The approach employs an in-house biometric template that is extracted from the original portrait photo (either full frontal or token frontal) and then stored on the ID document using a machine readable code (MRC). The ID document can then be validated with an application developed for a mobile device with a digital camera. The similarity score is estimated using an artificial neural network (ANN). Results show that we achieve validation accuracy of up to 99.5%, with a corresponding false match rate of 0.0047 and a false non-match rate of 0.00034. (CITATION: I. Medvedev, N. Gonçalves and L. Cruz, "Biometric System for Mobile Validation of ID And Travel Documents," 2020 International Conference of the Biometrics Special Interest Group (BIOSIG), 2020, pp. 1-5.)

  • Date: 01/10/2020
  • Featured In: 2020 International Conference of the Biometrics Special Interest Group (BIOSIG), Darmstadt, Germany, pp. 1-5
  • Publication Type: Conference Papers
  • Author(s): Iurii Medvedev, Nuno Gonçalves, Leandro Cruz
  • Download File
  • Visit Website
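
A minimal sketch of the validation step, assuming both templates are fixed-length float vectors; the published system scores similarity with a trained ANN, whereas this toy example uses plain cosine similarity with a threshold.

```python
import numpy as np

def validate(stored_template: np.ndarray, live_embedding: np.ndarray,
             threshold: float = 0.6) -> bool:
    """Compare the template decoded from the document's MRC with the live face."""
    a = stored_template / np.linalg.norm(stored_template)
    b = live_embedding / np.linalg.norm(live_embedding)
    return float(np.dot(a, b)) >= threshold

stored = np.random.rand(128)                 # template stored on the ID document
live = stored + 0.05 * np.random.rand(128)   # embedding of the captured face photo
print(validate(stored, live))
```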

Towards Facial Biometrics for ID Document Validation in Mobile Devices

Various modern security systems follow a tendency to simplify the use of existing biometric recognition solutions and embed them into ubiquitous portable devices. In this work, we continue the investigation and development of our method for securing identification documents. The original facial biometric template, which is extracted from a trusted frontal face image, is stored on the identification document in a secured, personalized machine-readable code. Such a document is protected against face photo manipulation and may be validated with an offline mobile application. We apply automatic methods for compressing the developed face descriptors to make the biometric validation system more suitable for mobile applications. As an additional contribution, we introduce several print-capture datasets that may be used for training and evaluating similar systems for mobile identification and travel document validation.

  • Date: 01/07/2021
  • Featured In: Applied Sciences
  • Publication Type: Journal Articles
  • Author(s): Iurii Medvedev, Farhad Shadmand, Leandro Cruz, Nuno Gonçalves
  • DOI: 10.3390/app11136134
  • Download File
  • Visit Website
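
One automatic way to compress face descriptors is PCA; the sketch below is an assumption used for illustration, not necessarily the compression evaluated in the paper.

```python
import numpy as np
from sklearn.decomposition import PCA

descriptors = np.random.rand(1000, 512)      # stand-in 512-D face descriptors
pca = PCA(n_components=64).fit(descriptors)  # learn a 64-D compressed space

compressed = pca.transform(descriptors[:1])  # shorter template to store in the code
restored = pca.inverse_transform(compressed) # approximate original descriptor
print(compressed.shape, float(np.linalg.norm(descriptors[:1] - restored)))
```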

Card3DFace—An Application to Enhance 3D Visual Validation in ID Cards and Travel Documents

The identification of a person is a natural way to gain access to information or places. A face image is an essential element of visual validation. In this paper, we present the Card3DFace application, which captures a single-shot image of a person’s face. After reconstructing the 3D model of the head, the application generates several images from different perspectives, which, when printed on a card with a layer of lenticular lenses, produce a 3D visualization effect of the face. The image acquisition is achieved with a regular consumer 3D camera, either using plenoptic, stereo or time-of-flight technologies. This procedure aims to assist and improve the human visual recognition of ID cards and travel documents through an affordable and fast process while simultaneously increasing their security level. The whole system pipeline is analyzed and detailed in this paper. The results of the experiments performed with polycarbonate ID cards show that this end-to-end system is able to produce cards with realistic 3D visualization effects for humans.

  • Date: 23/09/2021
  • Featured In: Applied Sciences
  • Publication Type: Journal Articles
  • Author(s): Leandro Dihl, Leandro Cruz, Nuno Gonçalves
  • DOI: 10.3390/app11198821
  • Download File
  • Visit Website
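
A simplified sketch of the multi-view stage, assuming the reconstructed head is available as a point cloud and using orthographic projection instead of the real rendering pipeline:

```python
import numpy as np

def render_views(points: np.ndarray, n_views: int = 9, max_angle: float = 20.0):
    """Rotate the head about the vertical axis and project one image per view."""
    views = []
    for angle in np.linspace(-max_angle, max_angle, n_views):
        a = np.radians(angle)
        rot_y = np.array([[np.cos(a), 0, np.sin(a)],
                          [0, 1, 0],
                          [-np.sin(a), 0, np.cos(a)]])
        rotated = points @ rot_y.T
        views.append(rotated[:, :2])          # drop depth: orthographic projection
    return views

head = np.random.rand(5000, 3)                # stand-in for the reconstructed head
views = render_views(head)
print(len(views), views[0].shape)
```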

Pseudo RGB-D Face Recognition

In the last decade, advances in and the popularity of low-cost RGB-D sensors have made it possible to acquire depth information of objects. Consequently, researchers began to address face recognition problems by capturing RGB-D face images with these sensors. Even so, the depth of human faces is not easy to acquire because of limitations imposed by privacy policies, and RGB face images are still far more common. Therefore, obtaining the depth map directly from the corresponding RGB image could help improve the performance of subsequent face processing tasks such as face recognition. Intelligent creatures can draw on a large amount of experience to obtain three-dimensional spatial information from two-dimensional scenes alone; machine learning methodology can teach computers to produce correct answers for such problems through training. To replace depth sensors with generated pseudo depth maps, in this paper we propose a pseudo RGB-D face recognition framework and provide data-driven ways to generate depth maps from 2D face images. Specifically, we design and implement a generative adversarial network model named "D+GAN" to perform multi-conditional image-to-image translation with face attributes. By this means, we validate pseudo RGB-D face recognition with experiments on various datasets. With the cooperation of image fusion technologies, especially the Non-subsampled Shearlet Transform, the accuracy of face recognition has been significantly improved.

  • Date: 01/08/2022
  • Featured In: IEEE Sensors Journal
  • Publication Type: Journal Articles
  • Author(s): Bo Jin, Leandro Cruz, Nuno Gonçalves
  • DOI: 10.1109/JSEN.2022.3197235
  • Download File
  • Visit Website
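
The D+GAN generator itself is not reproduced here; the sketch below only illustrates the pseudo RGB-D input, stacking an RGB face image with a generated depth map into a 4-channel tensor fed to a toy embedding network.

```python
import torch
import torch.nn as nn

rgb = torch.rand(1, 3, 112, 112)              # face image
pseudo_depth = torch.rand(1, 1, 112, 112)     # stand-in output of a depth generator
rgbd = torch.cat([rgb, pseudo_depth], dim=1)  # 4-channel pseudo RGB-D input

backbone = nn.Sequential(                     # toy 4-channel embedding network
    nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 128),                       # 128-D face embedding
)
print(backbone(rgbd).shape)                   # torch.Size([1, 128])
```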

Face depth prediction by the scene depth

Depth maps, also known as range images, directly reflect the geometric shape of objects. Due to several issues such as cost, privacy and accessibility, face depth information is not easy to obtain. However, the spatial information of faces is very important in many areas of computer vision, especially biometric identification. In contrast, scene depth information has become easier to obtain with the development of autonomous driving technology in recent years. This inspires an approach to face depth estimation that bridges the gap between scene depth and face depth. Previously, face depth estimation and scene depth estimation were treated as two completely separate domains. This paper proposes and explores using learned scene depth knowledge to estimate the depth map of faces from monocular 2D images. Through experiments, we have preliminarily verified the possibility of using scene depth knowledge to predict the depth of faces and its potential for face feature representation.

  • Date: 23/06/2021
  • Featured In: IEEE/ACIS 20th International Conference on Computer and Information Science, Shanghai, China
  • Publication Type: Conference Papers
  • Author(s): Bo Jin, Leandro Cruz, Nuno Gonçalves
  • DOI: 10.1109/ICIS51600.2021.9516598
  • Download File
  • Visit Website
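
A skeleton of the transfer idea, in which every name and shape is an assumption: an encoder-decoder depth network whose encoder stands in for one trained on scene depth is fine-tuned on face RGB/depth pairs with the encoder frozen.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(                        # stands in for a scene-depth encoder
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
)
decoder = nn.Sequential(                        # fine-tuned on face data
    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
)
for p in encoder.parameters():
    p.requires_grad = False                     # keep scene-depth knowledge fixed

optimiser = torch.optim.Adam(decoder.parameters(), lr=1e-4)
faces = torch.rand(4, 3, 128, 128)              # dummy face images
face_depth = torch.rand(4, 1, 128, 128)         # dummy ground-truth face depth

pred = decoder(encoder(faces))                  # predicted face depth maps
loss = nn.functional.l1_loss(pred, face_depth)
loss.backward()
optimiser.step()
print(pred.shape, float(loss))
```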