7 Light and Color Representation in Imaging Systems

7.1 Introduction

Visual images are created by patterns of light falling on the retina of the human eye. The goal of an imaging system is to use an image sensor to capture the light from a scene, to convert the sensor response to electronic form, and to store and/or transmit this data. At the end of the chain, a display device converts the electronic data back to an optical image that should appear as similar as possible to the original scene when viewed by a human observer, i.e. patterns and colors should be accurately reproduced. Alternatively, we may wish to accurately reproduce on the display the intent of a content producer, which may contain a mix of natural and synthetic imagery.

In the previous chapters, image signals have simply been represented as a scalar value between 0 and 1, representing a level of gray ranging from black to white. In this chapter, the exact nature of the image signal value is revealed for both grayscale and color images. In the case of color images, it is shown to be a three‐dimensional vector quantity. In this chapter we consider the representation of light and color in large fixed patches; the study of spatially and temporally varying color signals, i.e. images and video, is addressed in the next chapter. This chapter and the next are based on the book by the author [Dubois (2010)], which can be consulted for more details and references to the literature. We follow the notation used in that book.
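As a minimal illustrative sketch (not taken from the book), the following Python/NumPy fragment contrasts the scalar representation of a gray patch with the three‐component vector representation of a color patch; the RGB‐like interpretation of the three components is an assumption here, since their precise meaning is only developed later in the chapter.

```python
import numpy as np

# Grayscale: a single scalar in [0, 1], 0.0 = black, 1.0 = white.
gray_patch = 0.5                           # mid-gray

# Color: a three-dimensional vector, here loosely interpreted as
# normalized (R, G, B) components in [0, 1] (an assumption for
# illustration only).
color_patch = np.array([0.8, 0.2, 0.1])    # a reddish patch

# A grayscale image is then a 2-D array of scalars, while a color image
# is a 2-D array of 3-vectors, i.e. an H x W x 3 array.
gray_image = np.full((4, 4), gray_patch)
color_image = np.tile(color_patch, (4, 4, 1))

print(gray_image.shape)    # (4, 4)
print(color_image.shape)   # (4, 4, 3)
```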

7.2 ...
