Chapter 25. Optical Tracking
In previous chapters we discussed how accelerometers have changed the way people interact with video games. The same sort of innovation is occurring with optical sensors. Cameras, in both the visible and infrared spectrums, are being used to generate input for games. This chapter focuses on the Microsoft Kinect for Windows SDK and gives an overview of how to make a simple game that combines optical tracking with physics. First, we'll give a short introduction to the technologies these systems use to turn a camera into a tracking device.
Without getting too detailed, we should start by discussing a few things about digital cameras. First, most of us are familiar with the “megapixel” metric used to describe digital cameras. A camera's pixel count is the width of the frame in pixels multiplied by its height in pixels, and the megapixel rating is simply that count expressed in millions. A pixel, or picture element, contains information on intensity, color, and the location of the pixel relative to some origin. The amount of information depends on the bits per pixel, which determines how much color variation a particular pixel can display. Perhaps you’ve seen your graphics set to 16-bit or 24-bit modes; this describes how many colors a particular pixel can display. A 24-bit pixel can be one of roughly 16.8 million different colors at any instant. It is commonly held that the human eye can differentiate among about 10 million colors; ...
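To make these numbers concrete, here is a minimal C++ sketch of the arithmetic just described. The frame dimensions and bit depth are illustrative values chosen for this example, not figures taken from any particular camera.

#include <cstdint>
#include <iostream>

int main()
{
    // Hypothetical frame dimensions for illustration (a 1920 x 1080 sensor).
    const std::uint64_t width  = 1920;
    const std::uint64_t height = 1080;

    // Total pixels per frame is width times height;
    // the "megapixel" rating expresses this count in millions.
    const std::uint64_t pixelCount = width * height;
    const double megapixels = pixelCount / 1.0e6;

    // Bits per pixel determines how many distinct colors a pixel can take:
    // 2 raised to the number of bits. 24 bits gives 2^24 = 16,777,216 colors.
    const int bitsPerPixel = 24;
    const std::uint64_t colorCount = 1ULL << bitsPerPixel;

    std::cout << "Pixels per frame: " << pixelCount << "\n"
              << "Megapixels: " << megapixels << "\n"
              << "Colors at " << bitsPerPixel << " bpp: " << colorCount << "\n";
    return 0;
}

Running this prints about 2.07 megapixels and 16,777,216 colors, matching the ~16.8 million figure quoted above.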