22/11/2021

Ray-ONet: Efficient 3D Reconstruction From A Single RGB Image

Wenjing Bian, Zirui Wang, Kejie Li, Victor Adrian Prisacariu

Keywords: 3D reconstruction, shape representation, single-view reconstruction, occupancy networks, 3D deep learning

Abstract: We propose Ray-ONet to reconstruct detailed 3D models from monocular images efficiently. By predicting a series of occupancy probabilities along a ray back-projected from a pixel in camera coordinates, Ray-ONet improves reconstruction accuracy over Occupancy Networks (ONet) while reducing the network inference complexity to O(N^2). As a result, Ray-ONet achieves state-of-the-art performance on the ShapeNet benchmark with more than a 20× speed-up at 128^3 resolution, while maintaining a similar memory footprint during inference.
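The complexity claim can be made concrete with a small sketch. An occupancy network queried point-by-point needs N^3 forward passes to fill an N^3 grid; predicting M occupancies per back-projected ray needs only one query per pixel, i.e. O(N^2) queries. The sketch below is illustrative only and is not the authors' architecture: `toy_predictor` is a hypothetical stand-in for the learned, image-conditioned network, and the intrinsics matrix is made up.

```python
import numpy as np

def backproject_rays(K_inv, H, W):
    """Back-project every pixel into a unit-length viewing ray in camera coordinates."""
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)  # (H*W, 3) homogeneous pixels
    rays = pix @ K_inv.T
    return rays / np.linalg.norm(rays, axis=-1, keepdims=True)

def ray_occupancies(rays, predictor, m_samples):
    """One query per ray yields m occupancy probabilities along it:
    an H x W image costs O(N^2) ray queries instead of O(N^3) point queries."""
    return np.stack([predictor(r, m_samples) for r in rays])  # (H*W, m)

def toy_predictor(ray, m):
    """Hypothetical stand-in for the learned network: marks samples as occupied
    when they fall inside a sphere of radius 0.5 centred 1.5 units down the optical axis."""
    depths = np.linspace(0.5, 2.0, m)
    pts = depths[:, None] * ray[None, :]           # m sample points along the ray
    centre = np.array([0.0, 0.0, 1.5])
    return (np.linalg.norm(pts - centre, axis=-1) < 0.5).astype(float)

K_inv = np.linalg.inv(np.array([[50., 0., 16.],    # made-up 32x32 pinhole intrinsics
                                [0., 50., 16.],
                                [0., 0., 1.]]))
occ = ray_occupancies(backproject_rays(K_inv, 32, 32), toy_predictor, 64)
print(occ.shape)  # (1024, 64): 32*32 rays, 64 occupancy samples each
```

In the real method the per-ray predictor is conditioned on image features, so the loop over rays is a batched network forward pass rather than a Python loop.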

The talk and the paper were published at the BMVC 2021 virtual conference.
