Investigating the Spectral Bias in Neural Networks
Abstract: Neural networks have shown remarkable performance on a wide variety of machine learning tasks and data sets, both synthetic and real-world. However, despite their widespread use, the convergence and training dynamics of neural networks are neither trivial nor completely understood. This project investigates what some researchers refer to as the spectral bias of neural networks: during training, networks tend to fit low-complexity structure in the data before high-complexity structure. That is, the network first learns features of the target that correspond to low frequencies in the Fourier domain, and only later learns features that correspond to high frequencies. In this thesis, a quantitative measure of this bias is proposed, and empirical experiments demonstrate the prevalence of the spectral bias with respect to this measure. The experiments compare how different network parameters, architectures, and optimizers affect the network's ability to recover high-frequency content during training. Both tailored experiments with synthetic target functions and real-world data are considered. The machine learning problems investigated in this report are low-dimensional regression problems. The real-world problem is natural image regression, performed on the DIV2K data set used in the NTIRE challenge on Single Image Super-Resolution (SISR). The proposed measure shows that a spectral bias exists in this task as well, indicating that it occurs not only in simulated data and controlled experiments but also in data from real-world applications.
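To make the phenomenon concrete, the sketch below illustrates one way to observe a spectral bias in a low-dimensional regression setting. It is not taken from the thesis: the network size, target frequencies, and the relative per-frequency error used here are illustrative assumptions. A small MLP is fit to a 1D target composed of several sinusoids, and the error of each tracked Fourier coefficient of the prediction is printed during training; under a spectral bias, the low-frequency errors typically shrink first.

```python
# Minimal sketch of observing spectral bias in 1D regression.
# Assumptions (not from the thesis): a small ReLU MLP, target frequencies
# {1, 5, 20}, and a relative Fourier-coefficient error as the per-frequency measure.
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)

# Uniform grid on [0, 1) and a synthetic target with low- and high-frequency parts.
n = 256
x = torch.linspace(0.0, 1.0, n + 1)[:-1].unsqueeze(1)        # shape (n, 1)
freqs = [1, 5, 20]                                            # cycles over [0, 1)
y = sum(torch.sin(2 * np.pi * k * x) for k in freqs)          # shape (n, 1)

# Small fully connected network for 1D regression.
model = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def per_frequency_error(pred, target, ks):
    """Relative error of each tracked Fourier coefficient of the prediction."""
    P = np.fft.rfft(pred.detach().numpy().ravel())
    T = np.fft.rfft(target.numpy().ravel())
    return {k: float(abs(P[k] - T[k]) / (abs(T[k]) + 1e-12)) for k in ks}

for step in range(5001):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        errs = per_frequency_error(model(x), y, freqs)
        print(f"step {step:5d}  " + "  ".join(f"k={k}: {e:.3f}" for k, e in errs.items()))
```

Running this typically shows the error at k=1 dropping well before the error at k=20; the thesis's proposed measure and experimental setups are more elaborate, but the qualitative picture is the same.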