A Study on Efficient Memory Utilization in Machine Learning and Memory Intensive Systems

University essay from Lunds universitet / Institutionen för elektro- och informationsteknik (Lund University, Department of Electrical and Information Technology)

Abstract: As neural networks find more and more practical applications targeted at edge devices, implementing energy-efficient architectures is becoming crucial. Despite advances in process technology, the power and performance of memories remain a bottleneck for most computing platforms. The aim of this thesis is to study the effect of the breakdown structure of memories on power cost, focusing on a dedicated hardware accelerator for neural network applications. The evaluation platform of this study consists of a RISC-V based System-on-Chip (SoC), PULPissimo, integrated with an accelerator designed for a convolutional neural network (CNN) application. The memory organization of the CNN hardware accelerator is implemented as a flexible, configurable wrapper for studying different breakdown structures. In addition, several optimization techniques are applied to keep area and power costs within the defined design budget. As part of the study, multiple memory breakdown structures suitable for the accelerator's memory sub-system were analyzed for power consumption. The study reveals a non-linear increase in power consumption with the size of the static random-access memory (SRAM) modules, as well as the limits of memory partitioning when using SRAM. It also reveals the power and area limitations of D flip-flop based standard cell memory (SCM) in comparison with SRAM.
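The trade-off the abstract describes (splitting a memory into more banks lowers per-access dynamic power, but each extra bank adds fixed peripheral overhead, so total power eventually rises again) can be sketched with a toy first-order model. Everything here is a hypothetical illustration: the function names and all coefficients are assumptions for demonstration, not figures or methods from the thesis.

```python
# Hypothetical first-order model of memory power vs. bank partitioning.
# All coefficients are illustrative assumptions, not data from the thesis.

def sram_macro_power(bits):
    """Per-macro power (arbitrary units): a fixed peripheral cost
    (decoders, sense amplifiers) plus leakage growing with capacity."""
    PERIPHERY = 5.0          # fixed cost paid once per bank (assumed)
    LEAK_PER_BIT = 0.001     # cell leakage, roughly linear in size (assumed)
    return PERIPHERY + LEAK_PER_BIT * bits

def total_power(total_bits, banks):
    """Total power when the capacity is split into equal banks.
    Only one bank is active per access, so dynamic energy scales
    with bank size rather than with total capacity."""
    bank_bits = total_bits / banks
    ACCESS_PER_BIT = 0.0005  # dynamic energy factor for the active bank (assumed)
    static = banks * sram_macro_power(bank_bits)
    dynamic = ACCESS_PER_BIT * bank_bits
    return static + dynamic

if __name__ == "__main__":
    # Sweep bank counts for a 64 Kib memory: power first falls as banking
    # shrinks the active bank, then rises as per-bank overhead dominates.
    for banks in (1, 2, 4, 8, 16, 32):
        print(banks, round(total_power(64 * 1024, banks), 2))
```

Under these assumed coefficients the sweep shows a shallow minimum at a small bank count, mirroring the non-linear behaviour and the partitioning limit the study reports; the real thesis derives such curves from synthesized SRAM macros rather than a closed-form model.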
