GaussianShader: 3D Gaussian Splatting with Shading Functions for Reflective Surfaces

*Corresponding author

1ShanghaiTech University, 2The University of Hong Kong, 3Tencent America, 4Texas A&M University

GaussianShader enables free-viewpoint rendering of objects under distinct lighting environments.

Abstract

The advent of neural 3D Gaussians has recently brought about a revolution in the field of neural rendering, enabling high-quality renderings at real-time speeds. However, this explicit and discrete representation encounters challenges when applied to scenes featuring reflective surfaces.

In this paper, we present GaussianShader, a novel method that applies a simplified shading function on 3D Gaussians to enhance neural rendering in scenes with reflective surfaces while preserving training and rendering efficiency. The main challenge in applying the shading function lies in accurately estimating normals on discrete 3D Gaussians. Specifically, we propose a novel normal estimation framework based on the shortest axis directions of 3D Gaussians, with a carefully designed loss that enforces consistency between the predicted normals and the geometries of the Gaussian spheres.
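The shortest-axis idea can be illustrated with a minimal sketch: each 3D Gaussian carries per-axis scales and a rotation, and the rotated axis with the smallest scale serves as the normal candidate. This is an illustrative approximation, not the paper's exact implementation; the function names `quat_to_rotmat` and `shortest_axis_normal` and the camera-facing flip heuristic are our own assumptions.

```python
import numpy as np

def quat_to_rotmat(q):
    # Normalize a quaternion (w, x, y, z) and convert it to a 3x3 rotation matrix.
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def shortest_axis_normal(scale, quat, view_dir):
    # scale: per-axis scales of the Gaussian; quat: its orientation.
    # The column of the rotation matrix matching the smallest scale
    # is the shortest-axis direction, taken as the normal candidate.
    R = quat_to_rotmat(np.asarray(quat, dtype=float))
    n = R[:, np.argmin(scale)]
    # Heuristic: flip the normal so it faces the camera.
    if np.dot(n, view_dir) > 0:
        n = -n
    return n
```

For a flat disc-like Gaussian (small scale along one axis), the returned vector points along that thin axis, oriented toward the viewer.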

Experiments show that GaussianShader strikes a commendable balance between efficiency and visual quality. Our method surpasses 3D Gaussian Splatting in PSNR on specular object datasets by 1.57 dB. Compared to prior works that handle reflective surfaces, such as Ref-NeRF, our optimization time is drastically shorter (0.58 h vs. 23 h).

GaussianShader maintains real-time rendering speed and renders high-fidelity images for both general and reflective surfaces. Ref-NeRF and ENVIDR attempt to handle reflective surfaces, but they suffer from very time-consuming optimization and slow rendering. 3D Gaussian splatting remains highly efficient but cannot handle such reflective surfaces.

Pipeline

GaussianShader starts from neural 3D Gaussian spheres that integrate both conventional attributes and newly introduced shading attributes to accurately capture view-dependent appearance. We incorporate a differentiable environment lighting map to simulate realistic lighting. End-to-end training yields a model that reconstructs both reflective and diffuse surfaces with high material and lighting fidelity.
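The view-dependent shading described above can be sketched as a diffuse term plus a specular term that samples the environment map along the mirrored view direction. This is a simplified illustration under our own assumptions; `env_lookup` is a hypothetical callable standing in for the differentiable environment lighting map, and `shade` is not the paper's exact shading function.

```python
import numpy as np

def reflect(view_dir, normal):
    # Mirror the incoming view direction about the surface normal.
    return view_dir - 2.0 * np.dot(view_dir, normal) * normal

def shade(albedo, specular_tint, normal, view_dir, env_lookup):
    # Simplified shading sketch: diffuse color plus a tinted sample of
    # the environment light along the reflection direction.
    # env_lookup (hypothetical) maps a unit direction to an RGB radiance.
    refl = reflect(view_dir, normal)
    return albedo + specular_tint * env_lookup(refl)
```

In an end-to-end setup, `albedo`, `specular_tint`, and the environment map parameters would all receive gradients through this shading computation, which is what lets lighting and materials be optimized jointly.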

Reconstruction Results