Towards time predictable and efficient cache management in multi-threaded systems
Abstract: The introduction of cache memories in computer systems narrowed the well-known speed gap between main memory and the CPU. However, several issues can arise within the cache that significantly affect an application's performance and timing predictability. This thesis investigates one such issue: cache contention. The problem is most commonly observed in multicore architectures, but it can also arise in any system whose scheduler runs multiple threads. In this thesis, we present a scenario in which cache contention occurs locally in the L1 data cache of a single-core, multi-threaded system, allowing us to examine the impact of local cache contention on system performance and timing predictability. We then mitigate this contention through a way-based partitioning technique, proposing an approach that avoids cache contention while still maintaining reasonable overall performance. Our results show that way-partitioning provides inter-thread isolation at the cost of a slight performance drop.
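To illustrate the idea behind way-based partitioning (this is a minimal illustrative sketch, not the thesis's actual implementation; the cache model, class name, and parameters are assumptions), consider a set-associative cache in which each thread may only allocate into its own subset of ways. A miss from one thread can then never evict a line cached by another thread, which is the source of the inter-thread isolation the abstract refers to:

```python
NUM_SETS = 4
NUM_WAYS = 4

class WayPartitionedCache:
    """Toy set-associative cache with per-thread way restrictions (assumed model)."""

    def __init__(self, num_sets=NUM_SETS, num_ways=NUM_WAYS):
        # Each line holds (tag, owner_thread) or None when empty.
        self.lines = [[None] * num_ways for _ in range(num_sets)]
        # Per-set LRU order: front = least recently used.
        self.lru = [list(range(num_ways)) for _ in range(num_sets)]

    def access(self, thread, addr, allowed_ways):
        """Return True on a hit; on a miss, allocate only within allowed_ways."""
        set_idx = addr % NUM_SETS
        tag = addr // NUM_SETS
        ways = self.lines[set_idx]
        for w in allowed_ways:
            if ways[w] is not None and ways[w][0] == tag:
                self.lru[set_idx].remove(w)      # hit: mark way as most recently used
                self.lru[set_idx].append(w)
                return True
        # Miss: evict the LRU line among this thread's own ways only,
        # so other threads' lines are never displaced.
        victim = next(w for w in self.lru[set_idx] if w in allowed_ways)
        ways[victim] = (tag, thread)
        self.lru[set_idx].remove(victim)
        self.lru[set_idx].append(victim)
        return False

# Thread 0 is restricted to ways {0, 1}, thread 1 to ways {2, 3}.
cache = WayPartitionedCache()
T0_WAYS, T1_WAYS = (0, 1), (2, 3)

cache.access(0, 0, T0_WAYS)              # thread 0 caches address 0 (miss)
for a in range(0, 400, NUM_SETS):        # thread 1 thrashes the same cache set
    cache.access(1, a, T1_WAYS)
print(cache.access(0, 0, T0_WAYS))       # thread 0 still hits: isolation holds
```

Despite thread 1 streaming through far more lines than the set can hold, thread 0's cached line survives, because thread 1's victim selection is confined to its own ways. The corresponding cost, as the abstract notes, is that each thread effectively sees a smaller cache, which explains the slight performance drop.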