Online Learning with Sample Selection
Abstract: In data-driven network and systems engineering, models are often trained offline on measurement data collected from networks. Offline learning achieves good results but has drawbacks: model training incurs a high computational cost, and the training process takes a long time. In this project, we follow an online approach to model training. The approach maintains a cache of fixed size that stores measurement samples and recomputes ML models from the current cache. Key to this approach are sample selection algorithms that decide which samples are stored in the cache and which are evicted. We implement three sample selection methods in this project: reservoir sampling, maximum entropy sampling, and maximum coverage sampling. In the context of these sample selection methods, we also evaluate recomputation strategies that control when to retrain the model on the samples in the current cache; the retrained model then predicts incoming samples until the next recomputation. We compare three recomputation strategies: no recomputation, periodic recomputation, and recomputation triggered by the ADWIN algorithm. We evaluate the three sample selection methods on five datasets: the FedCSIS 2020 Challenge dataset and four KTH testbed datasets. We find that maximum entropy sampling performs well compared to the other sample selection methods, and that ADWIN-triggered recomputation reduces the number of recomputations without degrading prediction performance.
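The abstract does not include code, but the cache-maintenance idea can be illustrated with reservoir sampling, the first of the three selection methods it names. The sketch below is a minimal, hypothetical illustration of classic Algorithm R (not the project's actual implementation): each arriving sample fills the cache until it reaches capacity k, after which it replaces a random cached sample with probability k/(i+1), keeping the cache a uniform random sample of the stream.

```python
import random


def reservoir_sample(stream, k, rng=None):
    """Maintain a fixed-size cache that is a uniform random
    sample of the items seen so far (Algorithm R)."""
    rng = rng or random.Random()
    cache = []
    for i, sample in enumerate(stream):
        if len(cache) < k:
            # Cache not yet full: store the sample unconditionally.
            cache.append(sample)
        else:
            # With probability k/(i+1), evict a random cached
            # sample and store the new one in its place.
            j = rng.randrange(i + 1)
            if j < k:
                cache[j] = sample
    return cache
```

A model-recomputation step in the abstract's setup would then retrain on `cache` at the chosen moments (never, periodically, or when ADWIN signals a change) and use the resulting model until the next retraining.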