Single Depth-Image 3D Reflection Symmetry
and Shape Prediction

ICCV 2023

We present the Iterative Symmetry Completion Network (ISCNet), a single depth-image shape completion method that exploits reflective symmetry cues to recover more detailed shapes. The effectiveness of single depth-image shape completion methods is often sensitive to the accuracy of the estimated symmetry plane. ISCNet jointly estimates the symmetry plane and completes the shape in an iterative fashion: more complete shapes contribute to more robust symmetry plane estimates, and vice versa. Furthermore, our shape completion method operates in the image domain, enabling more efficient reconstruction of high-resolution, detailed geometry. We perform shape completion from pairs of viewpoints, reflected across the symmetry plane and predicted by a reinforcement learning agent, to improve robustness and to explicitly leverage symmetry. The figure on the right shows that ISCNet leverages symmetry cues more effectively than prior methods for robust shape completion from a single depth image.
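At its core, exploiting a reflective symmetry cue means mirroring points across an estimated plane. A minimal NumPy sketch of that operation follows; the function name and interface are illustrative, not the paper's API:

```python
import numpy as np

def reflect_across_plane(points, normal, offset):
    """Mirror an (N, 3) point cloud across the plane n . x = d.

    Uses the standard reflection p' = p - 2 (n . p - d) n for a unit
    normal n. In ISCNet, the plane parameters (n, d) would come from
    the learned symmetry detection module; here they are given.
    """
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)          # ensure a unit normal
    signed_dist = points @ n - offset  # signed distance of each point
    return points - 2.0 * signed_dist[:, None] * n
```

Reflecting twice returns the original cloud (the operator is an involution), and concatenating a partial cloud with its reflection yields a symmetrized cloud of the kind the method bootstraps from.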

Paper

Zhaoxuan Zhang, Bo Dong, Tong Li, Felix Heide, Pieter Peers, Baocai Yin, Xin Yang

Single Depth-Image 3D Reflection Symmetry and Shape Prediction

ICCV 2023

Iterative Symmetry Completion Network (ISCNet)

Our Iterative Symmetry Completion Network (ISCNet) takes a single-view depth map of a target object as input and iteratively completes the 3D mesh by alternating between refining the symmetry plane, the virtual reflection viewpoints, and the shape completion. To bootstrap the iterative process, we first project the input single-view depth map to an incomplete point cloud, which is passed to a symmetry detection module and a point cloud transposition operator. The symmetry detection module estimates the object's symmetry plane, which the point cloud transposition operator then uses to align the point cloud symmetrically around the estimated plane.

Next, a novel 3D shape completion module progressively completes the point cloud based on a pair of reflection viewpoints selected by an RL agent. For each reflection viewpoint pair, a depth and normal map pair is extracted from the partial point cloud. Because the point cloud is only partially complete, the extracted depth and normal maps exhibit holes. To address this, we employ a purpose-built inpainting module that not only repairs these holes but also assigns a confidence score to each repaired pixel. The inpainted depth and normal maps are then merged into the point cloud guided by their confidence scores. Finally, to improve robustness, we iteratively refine the symmetry plane, the reflection viewpoints, and the shape completion. We limit the number of iterations to 20, and only update the symmetry plane and reset the point cloud every 4th iteration, to balance robustness and efficiency.
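The iteration schedule described above can be sketched as a short loop. The helper signatures below (`estimate_plane`, `complete_step`) stand in for the learned symmetry-detection and completion modules; their names and interfaces are our illustrative assumptions, not the paper's actual API:

```python
import numpy as np

def iterative_completion(partial, estimate_plane, complete_step,
                         n_iters=20, sym_period=4):
    """Sketch of the ISCNet iteration schedule under assumed interfaces.

    `estimate_plane(points) -> plane` and
    `complete_step(points, plane) -> points` are placeholders for the
    learned symmetry detection and viewpoint-driven completion modules.
    """
    plane = estimate_plane(partial)          # bootstrap from the raw input
    points = partial.copy()
    for it in range(n_iters):                # the paper caps this at 20
        if it > 0 and it % sym_period == 0:  # every 4th iteration:
            plane = estimate_plane(points)   #   refine plane from fuller cloud
            points = partial.copy()          #   reset cloud, keep refined plane
        points = complete_step(points, plane)
    return points, plane
```

With a toy `complete_step` that unions a cloud with its mirror image, the loop converges to a symmetric cloud after the first completion step and remains stable across plane refreshes.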

Qualitative Comparisons

We compare the ISCNet predictions to those of the three top-performing existing methods: PoinTr, SnowFlake, and Front2Back. All competing methods are retrained on ShapeNet and tested on identical exemplars under the same conditions. Visually, ISCNet provides the 'cleanest' shape completion for all shown tasks. Although our method is designed for symmetric objects, we also obtain good reconstruction quality on non-symmetric objects, such as the sofa and table from the ScanNet dataset.


Related Publications

[1] Gene Chou, Ilya Chugunov, Felix Heide, GenSDF: Two-Stage Learning of Generalizable Signed Distance Functions, NeurIPS 2022

[2] Gene Chou, Yuval Bahat, Felix Heide, Diffusion-SDF: Conditional Generative Modeling of Signed Distance Functions, ICCV 2023