Scene Reconstruction From 4D Radar Data with GAN and Diffusion: A Hybrid Method Combining GAN and Diffusion for Generating Video Frames from 4D Radar Data

University essay from KTH/School of Electrical Engineering and Computer Science (EECS)

Abstract: 4D imaging radar is increasingly becoming a critical component in various industries thanks to beamforming technology and hardware advancements. However, it does not replace visual data in the form of 2D images captured by an RGB camera. Instead, 4D radar point clouds are a complementary data source, capturing spatial information and velocity along a Doppler dimension that cannot easily be obtained from a camera view alone. Some discriminative features of the scene captured by the two sensors are hypothesized to have a shared representation. A more interpretable visualization of the radar output can therefore be obtained by learning a mapping from the empirical distribution of the radar data to the distribution of images captured by the camera. To this end, the application of deep generative models to generate images conditioned on 4D radar data is explored. Two approaches that have become state of the art in recent years are tested: generative adversarial networks and diffusion models. They are compared qualitatively, through visual inspection, and quantitatively, by two metrics: mean squared error and object detection count. The generative process of the generative adversarial network proves easier to control through conditioning than that of the diffusion model; in contrast, the diffusion model produces samples of higher quality and is more stable to train. Combining the two yields a hybrid sampling method that achieves the best results while simultaneously speeding up the diffusion process.
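The abstract does not specify how the two models are combined, but one common way to realize such a hybrid (in the spirit of truncated, SDEdit-style sampling) is to let the conditional GAN propose an image, noise that proposal to an intermediate diffusion step, and run only the remaining reverse steps. The sketch below assumes a DDPM-style forward process; gan_generator, denoise_step, and all parameter names are hypothetical placeholders, not the thesis's actual interfaces.

```python
import torch

def hybrid_sample(gan_generator, diffusion_model, radar_condition,
                  alphas_cumprod, t_start=250):
    """Hybrid GAN + diffusion sampling (illustrative sketch only).

    A conditional GAN proposes an image from the radar data; the
    proposal is noised to an intermediate step t_start and refined by
    the reverse diffusion process. Starting at t_start instead of the
    full T steps makes sampling faster than diffusing from pure noise.
    """
    # 1) GAN proposal conditioned on 4D radar point-cloud features
    #    (hypothetical generator interface).
    x0_hat = gan_generator(radar_condition)

    # 2) Forward-noise the proposal to step t_start, per the DDPM
    #    forward process: x_t = sqrt(a_bar_t) * x_0 + sqrt(1 - a_bar_t) * eps.
    a_bar = alphas_cumprod[t_start]
    noise = torch.randn_like(x0_hat)
    x_t = a_bar.sqrt() * x0_hat + (1.0 - a_bar).sqrt() * noise

    # 3) Run the reverse process only from t_start down to 0
    #    (denoise_step is a hypothetical single reverse-diffusion step).
    for t in reversed(range(t_start)):
        x_t = diffusion_model.denoise_step(x_t, t, radar_condition)
    return x_t
```

Under these assumptions, starting the reverse process at an intermediate t_start both anchors the sample to the GAN's radar conditioning and cuts the number of denoising steps, which is consistent with the abstract's claim that the hybrid improves results while speeding up the diffusion process.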
