Chapter 15

Augmented and/or Mixed Reality

15.1. Introduction

The term “augmented reality” (AR) appeared in the early 1990s. The aim of AR is to enhance the user’s perception by adding information, such as sound, textual annotations or virtual objects, to the perceived scene. By its very nature, augmented reality is interactive and three-dimensional (3D), meaning that at any time the added elements must be correctly positioned relative to the real world as seen by the user. This concept has numerous applications [AZU 01] in fields such as medical imaging, maintenance assistance, collaborative working, architecture, cultural heritage and gaming. In practice, the scene is visualized either through a dedicated headset or goggles or, more simply, on a portable device (telephone, tablet computer, etc.).

To achieve coherent integration, virtual objects must be rendered at every instant from the viewpoint, or pose, of the camera carried by the user. Reliable, real-time pose computation methods are therefore needed. Moreover, a model of the scene is essential, first because most pose computation methods rely on matching the image against the model, and second in order to handle interactions between real and virtual objects, such as occlusion and mutual shadowing. Point cloud models are sufficient for pose computation, but managing interactions between real and virtual elements requires surface models.
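To make this requirement concrete, the short Python sketch below projects a point of a virtual object into the camera image using an estimated pose (rotation R, translation t) and an intrinsic matrix K. It is only an illustration of the standard pinhole projection underlying the rendering step; the function name and numerical values are assumptions introduced here and do not come from the chapter.

import numpy as np

def project_point(X_world, K, R, t):
    """Project a 3D point (world frame) into pixel coordinates."""
    X_cam = R @ X_world + t            # world frame -> camera frame (the pose)
    x = K @ X_cam                      # camera frame -> image plane
    return x[:2] / x[2]                # perspective division

# Illustrative values: focal length 800 px, principal point (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # camera aligned with the world axes
t = np.array([0.0, 0.0, 2.0])          # camera 2 m from the world origin
X_virtual = np.array([0.1, 0.0, 0.0])  # a point on a virtual object

print(project_point(X_virtual, K, R, t))  # pixel where the point must be drawn

Any error in R or t shifts the projected pixel, which is why the virtual object appears to “float” or drift when the pose estimate is unreliable.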

In this chapter, we will discuss the state of the art of ...
