The Latency Effects of Utilizing a Microservice Architecture in a Time-Critical System

University essay from Linköpings universitet/Institutionen för datavetenskap

Abstract: This study examines the effects of transforming a monolithic server system into a microservice architecture, focusing on the added latency introduced by a microservice orchestrator. The orchestrator was implemented using an OpenFlow switch controlled by the Beacon and Ryu OpenFlow controllers. These controllers, combined with round-robin, random-assignment, and server-aware load balancing algorithms, were compared to find the combination yielding the lowest latency and the best server balance across varying network environments. We show that the OpenFlow switch enforces a client-aware load balancing policy and that only the initial request is handled by the controller, which reduces the importance of choosing the optimal OpenFlow controller. In addition, the round-robin load balancer was preferred for homogeneous requests, whereas a server-aware load balancer was required for heterogeneous requests. For most requests, the proposed architecture slowed the system down by only a few microseconds. However, for 0.001% of all requests the slowdown was far more significant: each of those requests was at least 100 times slower than with a monolithic server architecture.
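To illustrate the load balancing comparison the abstract describes, the sketch below contrasts a round-robin policy with a simple server-aware (least-loaded) policy in Python. The backend names, the `outstanding` load estimate, and the request costs are illustrative assumptions, not details taken from the essay.

```python
import itertools

# Hypothetical backend identifiers (assumption; the essay's actual setup is not given here).
BACKENDS = ["server-a", "server-b", "server-c"]

# --- Round robin: rotate through backends regardless of their current load. ---
_rr_cycle = itertools.cycle(BACKENDS)

def pick_round_robin():
    """Return the next backend in a fixed rotation."""
    return next(_rr_cycle)

# --- Server-aware: pick the backend with the least outstanding work. ---
outstanding = {b: 0 for b in BACKENDS}  # illustrative load estimate per backend

def pick_server_aware():
    """Return the backend with the lowest current load estimate."""
    return min(outstanding, key=outstanding.get)

def dispatch(request_cost, server_aware=False):
    """Assign a request to a backend and record its (illustrative) cost."""
    backend = pick_server_aware() if server_aware else pick_round_robin()
    outstanding[backend] += request_cost
    return backend

# With homogeneous requests (equal cost) both policies balance equally well;
# with heterogeneous costs only the server-aware policy evens out the load.
if __name__ == "__main__":
    for cost in [1, 5, 1, 9, 1, 2]:  # heterogeneous request costs (made up for the demo)
        print(dispatch(cost, server_aware=True), outstanding)
```

Running the demo with heterogeneous costs shows the server-aware policy steering expensive requests away from already-loaded backends, which round robin cannot do since it ignores load entirely; this mirrors the abstract's finding that round robin suffices only when requests are homogeneous.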
