14/06/2020

Single-View View Synthesis With Multiplane Images

Richard Tucker, Noah Snavely

Keywords: view synthesis, monocular, multiplane image, image-based rendering, 3d deep learning, scale invariance

Abstract: A recent strand of work in view synthesis uses deep learning to generate multiplane images (a camera-centric, layered 3D representation) given two or more input images at known viewpoints. We apply this representation to single-view view synthesis, a problem which is more challenging but has potentially much wider application. Our method learns to predict a multiplane image directly from a single image input, and we introduce scale-invariant view synthesis for supervision, enabling us to train on online video. We show that this approach is applicable to several different datasets, that it additionally generates reasonable depth maps, and that it learns to fill in content behind the edges of foreground objects in background layers.
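
For a concrete picture of two ideas the abstract refers to, below is a minimal sketch, not the authors' implementation: rendering a multiplane image by back-to-front over-compositing of its RGBA planes, and a simple scale-alignment step of the kind that scale-invariant supervision of a monocular prediction requires. The array shapes, the plane ordering, and the geometric-mean alignment in scale_align are illustrative assumptions.

import numpy as np

def composite_mpi(rgba_planes):
    """Render an MPI to an RGB image by over-compositing its planes
    from back to front (the standard MPI rendering operation).

    rgba_planes: (D, H, W, 4) array ordered back to front; RGB and
    alpha values are assumed to lie in [0, 1].
    """
    out = np.zeros(rgba_planes.shape[1:3] + (3,))
    for plane in rgba_planes:  # index 0 is the farthest plane
        rgb, alpha = plane[..., :3], plane[..., 3:4]
        out = rgb * alpha + out * (1.0 - alpha)  # "over" operator
    return out

def scale_align(pred_disparity, ref_disparity, mask):
    """Hypothetical scale-alignment step: compute a single global
    scale factor matching predicted disparity to sparse reference
    points (geometric mean of per-pixel ratios), so the supervision
    signal is invariant to the unknown scale of a monocular prediction.

    pred_disparity, ref_disparity: (H, W) arrays; mask: (H, W) boolean
    array marking pixels where reference values exist.
    """
    eps = 1e-8  # avoid log(0)
    log_ratio = (np.log(ref_disparity[mask] + eps)
                 - np.log(pred_disparity[mask] + eps))
    return np.exp(log_ratio.mean())

With a factor like the one scale_align returns, either the predicted disparities or the target camera translation could be rescaled before computing a view-synthesis loss; the exact normalization the paper uses may differ.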

[Embedded video: talk for this paper, presented at the CVPR 2020 virtual conference.]

