Using NeRF- and Mesh-Based Methods to Improve Visualisation of Point Clouds

University essay from Lunds universitet/Matematik LTH

Abstract: In recent years, the field of synthesising images from novel viewpoints has seen major improvements, most importantly with the publication of Neural Radiance Fields (NeRF), which allows for extremely detailed and accurate 3D novel views. The use of LiDAR sensors to collect actual depth data has also increased, as they are immensely useful for achieving high-resolution 3D mapping of a space. However, the resulting point clouds can be hard to read, as they give only a discrete sample of surfaces and lack colour and texture. In this thesis we explore various ways of improving the visualisation and human understanding of scenes and objects captured by a stationary camera-LiDAR pair. We do this by first isolating individual rigid moving objects in a scene and constructing denser point clouds of these objects by projecting them onto the camera video and aggregating over time. By utilising the novel view synthesis method Point-NeRF, we then improve the visualisation of these dense point clouds further, training a point-based neural network on the aggregated point clouds and the corresponding video frames. Lastly, two methods for surface reconstruction of the objects and the background are tested. With this we achieve accurate and understandable renders of a variety of vehicles. We believe that, with a well-calibrated camera, this method shows significant promise for reconstructing scenes in 3D in post-processing.
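The projection step described above (mapping LiDAR points onto the camera video so their colours can be aggregated over frames) amounts to a standard pinhole-camera projection. The sketch below is an illustrative assumption, not the thesis' actual implementation; the function name `project_points` and the intrinsics/extrinsics conventions (`K`, `R`, `t` mapping world to camera coordinates) are hypothetical.

```python
import numpy as np

def project_points(points_world, K, R, t):
    """Project Nx3 world points into a calibrated camera's image plane.

    K: 3x3 intrinsics, R: 3x3 rotation, t: length-3 translation
    (world -> camera). Returns pixel coordinates for points in front
    of the camera, plus the boolean mask of which points those are.
    """
    pts_cam = points_world @ R.T + t        # world -> camera coordinates
    in_front = pts_cam[:, 2] > 0            # keep points in front of the camera
    uvw = pts_cam[in_front] @ K.T           # apply intrinsics
    uv = uvw[:, :2] / uvw[:, 2:3]           # perspective divide -> pixels
    return uv, in_front

# Example: a camera at the origin looking down +z, focal length 100 px,
# principal point (50, 50). A point on the optical axis projects to (50, 50);
# a point behind the camera is masked out.
K = np.array([[100.0, 0.0, 50.0],
              [0.0, 100.0, 50.0],
              [0.0,   0.0,  1.0]])
pts = np.array([[0.0, 0.0, 1.0],
                [0.0, 0.0, -1.0]])
uv, mask = project_points(pts, K, np.eye(3), np.zeros(3))
```

Given such per-frame pixel coordinates, a dense coloured point cloud could then be built by sampling the video frame at each projected pixel and averaging the samples for each LiDAR point across frames.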
