Introduction

Takuya FUNATOMI¹ and Takahiro OKABE²

¹Division of Information Science, Nara Institute of Science and Technology, Japan

²Department of Artificial Intelligence, Kyushu Institute of Technology, Fukuoka, Japan

In our physical world, light propagates from various positions in various directions. Light is emitted by light sources such as the sun and lamps, and reflected by surfaces such as walls and glass. The amount of light flowing in every direction through every point is described by a light field. As readers may know, light is an electromagnetic wave that oscillates perpendicular to its direction of travel. Therefore, light is characterized both by the spatial period of its oscillation, i.e., the wavelength, and by the direction of its oscillation, i.e., the polarization state. Consequently, the light field is a high-dimensional function of time t, position (x, y, z), direction (θ, ϕ), wavelength λ, and polarization state s.
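To make this dimensionality concrete, the sketch below models one light-field sample as a plain record and the light field itself as a function over that record. The Ray type, its field layout, and the constant returned by light_field are illustrative placeholders under the parameterization above, not an implementation from this book.

```python
from dataclasses import dataclass

@dataclass
class Ray:
    """One sample of the light field: when, where, in which
    direction, at what wavelength, and in what polarization state."""
    t: float           # time [s]
    x: float           # position [m]
    y: float
    z: float
    theta: float       # polar angle of the direction [rad]
    phi: float         # azimuthal angle of the direction [rad]
    wavelength: float  # lambda [nm]
    s: int             # polarization state (illustrative index)

def light_field(ray: Ray) -> float:
    """Radiance L(t, x, y, z, theta, phi, lambda, s) along one ray.
    A real system would interpolate measured data; a constant
    stands in here as a placeholder."""
    return 1.0
```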

Most cameras are inherently designed to mimic the human eye: they have three color channels, red, green, and blue (RGB), and capture about 30 frames per second. Therefore, conventional cameras capture only part of the modalities of a light field, with limited spatial, temporal, and spectral resolution. Some cameras are designed to capture other modalities, for example spectra from near UV to near IR rather than RGB, polarimetry, and the time of flight of light. Such modalities are difficult to perceive, but provide much information about ...
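As a toy illustration of how an RGB sensor collapses the spectral dimension, the following sketch integrates a spectral radiance distribution against three channel sensitivity curves. The Gaussian curve shape and the center wavelengths are assumptions made for demonstration, not data from any real camera.

```python
import numpy as np

def sensitivity(wl_nm, center_nm, width_nm=40.0):
    """Hypothetical Gaussian spectral sensitivity of one color channel;
    real cameras have measured, non-Gaussian curves."""
    return np.exp(-0.5 * ((wl_nm - center_nm) / width_nm) ** 2)

def rgb_response(spectrum):
    """Collapse spectral radiance L(lambda) to three channel values,
    p_c = sum over lambda of L(lambda) * s_c(lambda) * d(lambda),
    which is how an RGB camera discards most spectral information."""
    wl = np.linspace(400.0, 700.0, 301)             # visible range [nm]
    dwl = wl[1] - wl[0]
    L = spectrum(wl)                                # sampled spectral radiance
    centers = {"R": 600.0, "G": 540.0, "B": 460.0}  # assumed channel peaks [nm]
    return {c: float(np.sum(L * sensitivity(wl, mu)) * dwl)
            for c, mu in centers.items()}

# A flat ("white") spectrum produces nearly equal R, G, and B values.
print(rgb_response(lambda wl: np.ones_like(wl)))
```

Because many different input spectra can map to the same three numbers (metamerism), a spectral camera recovers information that an RGB camera necessarily loses.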
