Type of Publication

Thesis

Date:

9/2014

Status

Published

Augmented Reality using Non-Central Catadioptric Imaging Devices

Featured in:

MSc Thesis

Authors:

Tiago Dias

Abstract

This dissertation proposes a framework for augmented reality using non-central catadioptric imaging devices. Our system consists of a non-central catadioptric imaging device, formed by a perspective camera and a spherical mirror, mounted on a Pioneer 3-DX robot. Our main goal is, given a 3D virtual object in the world with known 3D coordinates, to project this object onto the 2D image plane of the non-central catadioptric imaging device. The framework projects textured objects (with detailed images or single-color textures) onto the image in real time, at up to 20 fps on a laptop, depending on the 3D object being projected. An augmented reality implementation must address several important issues, such as the projection of 3D virtual objects onto the 2D image plane, occlusions, illumination, and shading. To the best of our knowledge, this is the first time this problem is addressed for non-central systems (all state-of-the-art methods are derived for central camera systems). Since the subject is unexplored for non-central systems, existing algorithms and methodologies had to be reformulated and reimplemented to solve these problems.

Our approach follows a pipeline with two stages: pre-processing and real time. Each stage comprises a sequence of steps required for the correct operation of the framework. The pre-processing stage contains three steps: camera calibration, 3D object triangulation, and object texturization. The real-time stage also comprises three steps: "QI projection", occlusions, and illumination.

To test the robustness of the framework, three distinct 3D virtual objects were used: a parallelepiped, the Stanford bunny, and the happy Buddha from the Stanford repository. Several tests were performed for each object: a positional light (on top of a static robot or at an arbitrary position) pointing at the 3D object; a moving light source following two different motions (not simultaneously); three light sources of different colors, each with its own motion, pointing at the object; and a light source mounted on top of a moving robot, pointing at the position of the 3D virtual object.
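As an illustration of the central geometric step (projecting a 3D virtual point through the mirror into the image), the sketch below shows one possible way to compute it numerically: the reflection point on the spherical mirror is found from the law of reflection by a small optimization, and that mirror point is then imaged by the pinhole camera. The camera pose, intrinsics, mirror parameters, and the optimization-based solver are all illustrative assumptions for this sketch, not the calibration or the algorithm used in the dissertation.

# Minimal sketch: project one 3D virtual point through a spherical-mirror
# catadioptric device (perspective camera + spherical mirror).
# All numeric values below are illustrative, not the thesis calibration.
import numpy as np
from scipy.optimize import minimize

def sphere_point(angles, centre, radius):
    """Point on the mirror sphere parameterised by spherical angles."""
    theta, phi = angles
    return centre + radius * np.array([
        np.sin(theta) * np.cos(phi),
        np.sin(theta) * np.sin(phi),
        np.cos(theta),
    ])

def reflection_residual(angles, cam_centre, world_pt, centre, radius):
    """Misalignment between the reflected ray and the ray towards the camera."""
    m = sphere_point(angles, centre, radius)
    n = (m - centre) / radius                    # outward surface normal
    d_in = m - world_pt
    d_in /= np.linalg.norm(d_in)                 # incoming ray: world point -> mirror
    d_ref = d_in - 2.0 * np.dot(d_in, n) * n     # law of reflection
    d_cam = cam_centre - m
    d_cam /= np.linalg.norm(d_cam)               # desired direction: mirror -> camera
    return 1.0 - np.dot(d_ref, d_cam)            # zero when perfectly aligned

def project_point(world_pt, cam_centre, K, sphere_centre, sphere_radius):
    """Project one 3D point through the spherical mirror into the image."""
    # Initial guess on the camera-facing side of the mirror, so the solver
    # converges to the physically visible reflection point.
    res = minimize(reflection_residual, x0=np.array([np.pi / 2, 0.0]),
                   args=(cam_centre, world_pt, sphere_centre, sphere_radius),
                   method="Nelder-Mead")
    m = sphere_point(res.x, sphere_centre, sphere_radius)
    ray = m - cam_centre                         # mirror point as seen by the pinhole camera
    uvw = K @ ray                                # camera assumed at origin, identity rotation
    return uvw[:2] / uvw[2]                      # pixel coordinates

if __name__ == "__main__":
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])              # illustrative intrinsics
    cam_centre = np.zeros(3)
    sphere_centre = np.array([0.0, 0.0, 0.5])    # mirror in front of the camera
    sphere_radius = 0.1
    virtual_pt = np.array([0.3, 0.2, 1.5])       # one vertex of the 3D virtual object
    print(project_point(virtual_pt, cam_centre, K, sphere_centre, sphere_radius))

In a full pipeline along the lines described in the abstract, each vertex of the triangulated, textured object would be projected in this way using the calibrated camera and mirror parameters, after which occlusions and illumination are resolved per frame.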

Citation
Tiago Dias (2014), Augmented Reality using Non-Central Catadioptric Imaging Devices. MSc thesis, University of Coimbra.


RECENT PUBLICATIONS

StylePuncher: encoding a hidden QR code into images

Authors: Farhad Shadmand; Luiz Schirmer; Nuno Gonçalves
Featured in: 14th International Conference on Pattern Recognition Applications and Methods (ICPRAM'25)

RiemStega: Covariance-based loss for print-proof transmission of data in images

Authors: Aniana Cruz; Guilherme Schardong; Luiz Schirmer; João Marcos; Farhad Shadmand; Nuno Gonçalves
Featured in: IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2025

MorFacing: A Benchmark for Estimation Face Recognition Robustness to Face Morphing Attacks

Authors: Iurii Medvedev and Nuno Gonçalves
Featured in: IEEE International Joint Conference on Biometrics (IJCB 2024)


RECENT PROJECTS

FACING2 – Face Image Understanding
VISUAL-ID – Unique Visual Identities in Graphics, Images and Faces
UniqueMark

Institute of Systems and Robotics, Department of Electrical and Computer Engineering, University of Coimbra