Making Things See
by Greg Borenstein
January 2012
Make: Community

Background Removal, User Pixels, and the Scene Map

Up to this point in the chapter, we’ve been working with the skeleton data exclusively. However, there are some things that OpenNI can do without the full calibration process. These techniques aren’t as universally useful as joint tracking, but they are handy to have in your toolkit while working with the Kinect. These alternate techniques break down into two categories: those that work with pixels and those that work with gestures. We’ll get into gestures in the next section, but first we’re going to look at what OpenNI lets us do with pixels.

In this section, we’re going to learn how to combine OpenNI’s user tracking with the depth and RGB images. Think back to our discussion of the calibration ...
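To make the idea concrete before we dig in, here is a minimal sketch of user-pixel background removal, assuming the SimpleOpenNI Processing library the book works with. The exact method names (enableUser(), userMap(), alternativeViewPointDepthToImage()) vary between SimpleOpenNI releases, so treat this as an illustration of the technique rather than the book's own listing.

import SimpleOpenNI.*;

SimpleOpenNI kinect;

void setup() {
  size(640, 480);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();   // depth image drives user tracking
  kinect.enableRGB();     // color image we will mask
  kinect.enableUser();    // user map: one label per depth pixel
  // Align the depth pixels with the RGB pixels so the mask lines up.
  kinect.alternativeViewPointDepthToImage();
}

void draw() {
  kinect.update();

  PImage rgb = kinect.rgbImage();
  // One label per pixel: 0 means background, 1..n are tracked user IDs.
  int[] userMap = kinect.userMap();

  rgb.loadPixels();
  for (int i = 0; i < userMap.length; i++) {
    if (userMap[i] == 0) {
      rgb.pixels[i] = color(0); // black out everything that isn't a user
    }
  }
  rgb.updatePixels();

  image(rgb, 0, 0);
}

Note that no calibration pose is required for this: once user detection has labeled each depth pixel, the same mask could just as easily be applied to the depth image, or used to composite the user over a different background.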



ISBN: 9781449321918