Neural Rendering for High-Fidelity Product Visualisation
Abstract
High-fidelity product visualisation typically requires significant manual preparation of CAD data, including geometry refinement, topology optimisation, and material baking, before it can be used in interactive real-time applications. Although these steps improve performance while maintaining acceptable visual quality, the prepared assets cannot match the fidelity of the originals, which are typically rendered offline with path tracers or in real time on high-end GPU hardware.
This work investigates whether neural rendering techniques can bypass much of this manual processing by directly leveraging high-quality 3D rendered output. Specifically, it explores the application of Gaussian splatting, a recent advancement building on neural radiance fields (NeRF), for interactive product visualisation.
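To make the core idea of Gaussian splatting concrete, the following minimal sketch alpha-composites depth-sorted 2D Gaussians front to back, following the standard compositing rule C = Σ_i c_i α_i Π_{j<i} (1 − α_j). All splat parameters here are invented for illustration; real 3D Gaussian splatting projects anisotropic 3D Gaussians into screen space and optimises their parameters by gradient descent against training images.

```python
import numpy as np

def render_splats(h, w, splats):
    """Front-to-back alpha compositing of isotropic 2D Gaussian splats.

    splats: list of ((cx, cy), sigma, opacity, rgb), sorted near to far.
    Returns an (h, w, 3) image with values in [0, 1].
    """
    ys, xs = np.mgrid[0:h, 0:w]
    image = np.zeros((h, w, 3))
    transmittance = np.ones((h, w))          # fraction of light not yet absorbed
    for (cx, cy), sigma, opacity, rgb in splats:
        # Per-pixel Gaussian falloff around the splat centre.
        g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
        alpha = opacity * g                  # this splat's per-pixel alpha
        image += (transmittance * alpha)[..., None] * np.asarray(rgb)
        transmittance *= 1.0 - alpha         # occlude everything behind
    return image

# Two hypothetical splats: a near red one and a farther blue one.
img = render_splats(32, 32, [((10, 12), 3.0, 0.8, (1.0, 0.2, 0.2)),
                             ((20, 18), 5.0, 0.6, (0.2, 0.4, 1.0))])
```

Because each splat only attenuates, never increases, the remaining transmittance, the composited image stays within [0, 1] as long as opacities and colours do.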
Our experimental results show that high peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) scores can be achieved without traditional geometry simplification or material baking, provided the training imagery closely matches the intended viewing conditions. Diffuse dielectrics (matte paint, stone, wood, plastic) were reconstructed effectively, owing to their broad, forgiving reflectance. Metals (conductors), which demand an accurate specular response, were more sensitive. Transparent dielectrics (glass, water, acrylic) and translucent, subsurface-scattering materials (frosted glass, skin, wax) remained challenging: their light transport involves refraction, transmission, and internal scattering that splat-based methods do not capture well.
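For readers unfamiliar with the two metrics reported above, the sketch below implements them directly in NumPy. This is illustrative, not the paper's evaluation code: the SSIM here is the simplified single-window (global) form rather than the usual locally windowed mean, and the reference and render images are invented stand-ins.

```python
import numpy as np

def psnr(reference, render, data_range=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(R^2 / MSE)."""
    mse = np.mean((reference - render) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(x, y, data_range=1.0):
    """Global (single-window) SSIM between two greyscale images."""
    c1 = (0.01 * data_range) ** 2            # stabilising constants from the
    c2 = (0.03 * data_range) ** 2            # original SSIM formulation
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
reference = rng.random((64, 64))             # stand-in for a path-traced frame
# Stand-in for a neural render: the reference plus small reconstruction error.
render = np.clip(reference + rng.normal(0.0, 0.01, reference.shape), 0.0, 1.0)

print(f"PSNR: {psnr(reference, render):.1f} dB, "
      f"SSIM: {global_ssim(reference, render):.3f}")
```

With this small error level, PSNR lands in the high-30s to low-40s dB and SSIM is close to 1, the regime the abstract describes for well-reconstructed diffuse materials.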
This work shows that, with targeted optimisation, Gaussian splatting can deliver convincing product visualisations while cutting manual preparation, highlighting both its promise and current limits for AI-assisted workflows.