LiDAR novel view synthesis (NVS) has emerged as a new task within LiDAR simulation, offering valuable simulated point cloud data from novel viewpoints to aid the development of autonomous driving systems. However, existing LiDAR NVS methods typically rely on neural radiance fields (NeRF) as their 3D representation, which incurs significant computational costs in both training and rendering. Moreover, NeRF and its variants are designed for symmetrical scenes, making them ill-suited for driving scenarios. To address these challenges, we propose GS-LiDAR, a novel framework for generating realistic LiDAR point clouds with panoramic Gaussian splatting. Our approach employs 2D Gaussian primitives with periodic vibration properties, allowing for precise geometric reconstruction of both static and dynamic elements in driving scenarios. We further introduce a panoramic rendering technique with explicit ray-splat intersection, guided by panoramic LiDAR supervision. By incorporating intensity and ray-drop spherical harmonic (SH) coefficients into the Gaussian primitives, we enhance the realism of the rendered point clouds. Extensive experiments on KITTI-360 and nuScenes demonstrate the superiority of our method in quantitative metrics, visual quality, and training and rendering efficiency.
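To make the panoramic rendering step concrete, the following is a minimal, self-contained sketch (not the paper's implementation) of the two ingredients named above: mapping the pixels of an equirectangular range image to LiDAR rays, and explicitly intersecting a ray with a planar 2D Gaussian splat. All names, the field-of-view values (which mimic a 64-beam sensor), and the least-squares solve for splat-local coordinates are our assumptions; the actual method would use a tile-based rasterizer rather than a per-pixel loop.

```python
import numpy as np

def panorama_rays(H, W, fov_up=np.deg2rad(2.0), fov_down=np.deg2rad(-24.8)):
    """Map each pixel of an equirectangular range image to a unit ray.

    Azimuth spans the full 360 degrees; elevation spans the sensor's
    vertical field of view (the defaults here are illustrative).
    """
    u = (np.arange(W) + 0.5) / W             # horizontal pixel coordinate in [0, 1)
    v = (np.arange(H) + 0.5) / H             # vertical pixel coordinate in [0, 1)
    azim = 2.0 * np.pi * u - np.pi           # azimuth in [-pi, pi)
    elev = fov_up + v * (fov_down - fov_up)  # elevation, top row at fov_up
    azim, elev = np.meshgrid(azim, elev)     # both (H, W)
    dirs = np.stack([np.cos(elev) * np.cos(azim),
                     np.cos(elev) * np.sin(azim),
                     np.sin(elev)], axis=-1) # (H, W, 3) unit directions
    return dirs

def ray_splat_depth(o, d, p, tu, tv):
    """Explicit intersection of the ray o + t*d with one planar 2D splat.

    The splat is the plane through center p spanned by the scaled tangent
    axes tu and tv (as in 2D Gaussian splatting). Returns the ray depth t
    and the Gaussian weight at the hit point, or None on a miss.
    """
    n = np.cross(tu, tv)                     # splat plane normal
    denom = d @ n
    if abs(denom) < 1e-8:                    # ray parallel to the splat plane
        return None
    t = ((p - o) @ n) / denom
    if t <= 0:                               # intersection behind the sensor
        return None
    x = o + t * d - p                        # hit point relative to the center
    # Splat-local coordinates: solve [tu tv] @ (uu, vv) = x in least squares.
    A = np.stack([tu, tv], axis=1)           # (3, 2)
    uv, *_ = np.linalg.lstsq(A, x, rcond=None)
    weight = np.exp(-0.5 * (uv[0] ** 2 + uv[1] ** 2))
    return t, weight
```

In a full renderer, the depths and weights gathered from all splats along each ray would be alpha-composited front to back to produce the final range, intensity, and ray-drop values of the panorama.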
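How intensity and ray-drop attributes might be attached to a splat can likewise be illustrated with a low-degree spherical-harmonic evaluation. This is a sketch under our own assumptions: the abstract does not specify the SH degree or any activation, so the degree-1 basis (with the sign convention common in Gaussian-splatting codebases) and the sigmoid mapping the ray-drop logit to a probability are illustrative choices.

```python
import numpy as np

def eval_sh_deg1(coeffs, d):
    """Evaluate degree-1 real spherical harmonics along view direction d.

    coeffs: (4,) SH coefficients for one channel (e.g. intensity or a
    ray-drop logit); d: unit 3-vector from the sensor toward the splat.
    """
    C0 = 0.28209479177387814   # Y_0^0
    C1 = 0.4886025119029199    # magnitude of the degree-1 basis terms
    x, y, z = d
    basis = np.array([C0, -C1 * y, C1 * z, -C1 * x])
    return basis @ coeffs

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical per-splat attributes (random here purely for illustration).
rng = np.random.default_rng(0)
intensity_sh = rng.normal(size=4)
raydrop_sh = rng.normal(size=4)

view_dir = np.array([0.6, 0.0, 0.8])                  # unit view direction
intensity = eval_sh_deg1(intensity_sh, view_dir)      # view-dependent intensity
p_drop = sigmoid(eval_sh_deg1(raydrop_sh, view_dir))  # probability the return is dropped
```

Storing these channels as SH coefficients rather than scalars lets the rendered intensity and ray-drop pattern vary with viewing direction, which is what makes the simulated returns view-consistent across novel poses.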