7
Exploring Controllable Neural Feature Fields
In the previous chapter, you learned how to represent a 3D scene using Neural Radiance Fields (NeRF). We trained a single neural network on posed multi-view images of a 3D scene to learn an implicit representation of it. Then, we used the NeRF model to render the 3D scene from novel viewpoints and viewing angles. With this model, we assumed that the objects and the background were static.
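To recap the core idea in code, the sketch below shows a single MLP that maps a 3D point plus a viewing direction to a color and a volume density, which a renderer would then integrate along camera rays. This is a minimal illustration only; the layer sizes, input encoding, and `TinyNeRF` name are assumptions, not the architecture used in the previous chapter.

```python
import torch
import torch.nn as nn

# Minimal sketch of the NeRF idea: one MLP maps a 3D point (x, y, z)
# plus a 2D viewing direction (theta, phi) to a color and a density.
# Layer sizes here are illustrative, not the book's architecture.
class TinyNeRF(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(5, hidden),        # input: (x, y, z, theta, phi)
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 4),        # output: (r, g, b, sigma)
        )

    def forward(self, points_and_dirs: torch.Tensor) -> torch.Tensor:
        out = self.mlp(points_and_dirs)
        rgb = torch.sigmoid(out[..., :3])   # colors constrained to [0, 1]
        sigma = torch.relu(out[..., 3:])    # non-negative volume density
        return torch.cat([rgb, sigma], dim=-1)

# Query the field at a batch of sample points along camera rays.
samples = torch.rand(1024, 5)
field_values = TinyNeRF()(samples)          # shape: (1024, 4)
```

Because the whole scene lives in the network's weights, every query returns values for one fixed scene — which is exactly the limitation the questions below ask us to move past.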
But it is fair to wonder whether it is possible to generate variations of the 3D scene. Can we control the number of objects, their poses, and the scene background? Can we learn the 3D structure of a scene without posed images and without knowing the camera parameters?
By the end of this chapter, ...