How Far Can We Compress Instant-NGP-Based NeRF?

CVPR 2024
Shanghai Jiao Tong University, Monash University

We introduce context models to substantially compress feature embeddings, built on three key technical components: Level-wise Context Models, Dimension-wise Context Models, and Hash Fusion.


Abstract

To the best of our knowledge, we are the first to construct and exploit context models for NeRF compression. We achieve size reductions of 100X and 70X with improved fidelity over the baseline Instant-NGP on the Synthetic-NeRF and Tanks and Temples datasets, respectively. We further attain 86.7% and 82.3% storage size reductions over the SOTA NeRF compression method BiRF.

In recent years, Neural Radiance Fields (NeRF) have demonstrated remarkable capabilities in representing 3D scenes. To expedite the rendering process, learnable explicit representations have been introduced and combined with the implicit NeRF representation, which, however, comes at the cost of a large storage footprint.

In this paper, we introduce the Context-based NeRF Compression (CNC) framework, which leverages highly efficient context models to provide a storage-friendly NeRF representation. Specifically, we excavate both level-wise and dimension-wise context dependencies to enable probability prediction for information entropy reduction. Additionally, we exploit hash collision and occupancy grids as strong prior knowledge for better context modeling.
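To make the level-wise idea concrete, the sketch below predicts Bernoulli probabilities for the binarized embeddings of one hash level from the already-decoded embeddings of the previous (coarser) level. This is a minimal PyTorch sketch under our own assumptions: the module name LevelContext, the tensor shapes, and the small-MLP design are illustrative, not the authors' implementation.

# Hypothetical level-wise context model (illustrative sketch, not the paper's code).
import torch
import torch.nn as nn

class LevelContext(nn.Module):
    """Predict Bernoulli probabilities for the binary embeddings of hash
    level l from the already-decoded embeddings of level l-1."""

    def __init__(self, dim: int, hidden: int = 32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, prev_level: torch.Tensor) -> torch.Tensor:
        # prev_level: (N, dim) binary embeddings {0, 1} of the coarser level,
        # gathered at locations corresponding to the current level's entries.
        # Returns p in (0, 1): predicted probability that each current bit is 1.
        return torch.sigmoid(self.mlp(prev_level.float()))

During training, the predicted probabilities feed a differentiable rate estimate (see the sketch in Main Method); at encoding time, they parameterize an entropy coder.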

Main Method

Overview of the proposed level-wise and dimension-wise context models. Context models are designed to estimate probability parameters from previous levels or higher dimensions, and the Bernoulli distribution is then utilized to estimate their bit consumption in a differentiable manner. For the hash fusion module, please refer to the paper for more details.
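The differentiable bit estimate follows directly from the Bernoulli model: the expected code length of a binary symbol x under Bernoulli(p) is its negative log2-likelihood. Below is a minimal sketch; the function name and the clamping epsilon are our choices, not the paper's code.

import torch

def bernoulli_bits(x: torch.Tensor, p: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Estimated bits to encode binary symbols x in {0, 1} under Bernoulli(p),
    i.e. the negative log2-likelihood; differentiable w.r.t. p."""
    p = p.clamp(eps, 1.0 - eps)  # avoid log2(0)
    return -(x * torch.log2(p) + (1.0 - x) * torch.log2(1.0 - p))

Summed over all embedding bits, this rate term can be added to the rendering loss so that fidelity and storage cost are optimized jointly.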

Main Performance

Experiments are conducted on the synthetic Synthetic-NeRF dataset and the real-world large-scale Tanks and Temples dataset.

Visual Comparisons

Side-by-side comparisons (four scenes): Ours vs. Instant-NGP [Müller et al. 2022].

BibTeX


      @inproceedings{cnc2024,
        title={How Far Can We Compress Instant-NGP-Based NeRF?},
        author={Chen, Yihang and Wu, Qianyi and Harandi, Mehrtash and Cai, Jianfei},
        booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
        year={2024}
      }