Essays about: "Arithmetic Computation"

Showing results 1-5 of 9 essays containing the words Arithmetic Computation.

  1. EMONAS: Evolutionary Multi-objective Neuron Architecture Search of Deep Neural Network

    University essay from KTH/Skolan för elektroteknik och datavetenskap (EECS)

    Author : Jiayi Feng; [2023]
    Keywords : DNN Deep Neural Network; NAS Neural Architecture Search; EA Evolutionary Algorithm; Multi-Objective Optimization; Binary One Optimization; Embedded Systems;

    Abstract : Customized Deep Neural Network (DNN) accelerators have become increasingly popular in applications ranging from autonomous driving and natural language processing to healthcare and finance. However, deploying them directly on embedded system peripherals within real-time operating systems (RTOS) is difficult because of the mismatch between the complexity of DNNs and the limited resources of embedded devices.

  2. Leveraging Posits for the Conjugate Gradient Linear Solver on an Application-Level RISC-V Core

    University essay from KTH/Skolan för elektroteknik och datavetenskap (EECS)

    Author : David Mallasén Quintana; [2022]
    Keywords : Computer Arithmetic; Conjugate Gradient; Posit; IEEE-754; Floating Point; High-Performance Computing; RISC-V;

    Abstract : Emerging floating-point arithmetics provide a way to optimize the execution of computationally intensive algorithms. This is the case with scientific computational kernels such as the Conjugate Gradient (CG) linear solver. Exploring new arithmetics is of paramount importance to maximize the accuracy and timing performance of these algorithms.
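The Conjugate Gradient kernel mentioned in this abstract can be sketched in a few lines. This is a minimal NumPy illustration of the standard CG iteration for a symmetric positive-definite system, not the thesis's posit-based RISC-V implementation:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A via Conjugate Gradient."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)   # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:   # converged
            break
        p = r + (rs_new / rs_old) * p  # new A-conjugate direction
        rs_old = rs_new
    return x
```

Because every step is dominated by one matrix-vector product and a handful of dot products, the numerical behaviour of CG is sensitive to the underlying arithmetic format, which is what makes it a useful benchmark for posits versus IEEE-754.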

  3. Representation and Efficient Computation of Sparse Matrix for Neural Networks in Customized Hardware

    University essay from KTH/Skolan för elektroteknik och datavetenskap (EECS)

    Author : Lihao Yan; [2022]
    Keywords : Convolutional neural networks; Sparse matrix representation; Model compression; Algorithm-hardware co-design; AlexNet;

    Abstract : Deep Neural Networks are now applied across a wide range of fields. However, the hundreds of thousands of neurons in each layer lead to heavy memory requirements and a massive number of operations, making it difficult to deploy deep neural networks on mobile devices where hardware resources are limited.
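Sparse matrix representations of the kind this thesis explores save memory by storing only nonzero values. As a point of reference, here is a minimal sketch of the common Compressed Sparse Row (CSR) format; the customized-hardware representation studied in the thesis may differ:

```python
def to_csr(dense):
    """Convert a dense 2-D list to CSR: (values, column indices, row pointers).

    row_ptr[i]:row_ptr[i+1] slices out the nonzeros of row i,
    so zeros are never stored.
    """
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))
    return values, col_idx, row_ptr
```

For the highly sparse weight matrices produced by pruning a network such as AlexNet, a format like this trades irregular memory access for a large reduction in storage and multiply-accumulate operations.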

  4. Effect of distance measures and feature representations on distance-based accessibility measures

    University essay from Lunds universitet/Institutionen för naturgeografi och ekosystemvetenskap

    Author : Jeremy Azzopardi; [2018]
    Keywords : geography; GIS; Bland-Altman; Euclidean distance; geographic accessibility; network distance; recreational area; Earth and Environmental Sciences;

    Abstract : Distance-based accessibility measures are often built using vector representations of origin and destination features, and Euclidean or network-based distances. There are few comparisons of how the choice of feature representations and distance types affects results. Existing comparisons often use Spearman’s rank correlation coefficients.

  5. Hardware Architectures for the Inverse Square Root and the Inverse Functions using Harmonized Parabolic Synthesis

    University essay from Lunds universitet/Institutionen för elektro- och informationsteknik

    Author : Niclas Thuning; Tor Leo Bärring; [2016]
    Keywords : Approximation; Parabolic Synthesis; Newton-Raphson Method; Inverse Square Root; Unary Functions; Elementary Functions; Second-Degree Interpolation; Arithmetic Computation; VLSI; ASIC; Technology and Engineering;

    Abstract : This thesis presents a comparison between implementations of the inverse square root function, using two approximation algorithms: Harmonized Parabolic Synthesis and the Newton-Raphson Method. The input is a 15-bit fixed-point number, the range of which is selected so that the implementation is suitable for use as a block implementing the inverse square root for floating-point numbers, and the designs are constrained by the error, which must be < 2^(-15).
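The Newton-Raphson iteration for the inverse square root named in this abstract can be sketched briefly. This is a floating-point illustration of the classic update y ← y(1.5 − 0.5·x·y²), with a simple exponent-halving seed; the thesis evaluates a 15-bit fixed-point hardware design, so the seed generation and iteration count there will differ:

```python
import math

def inv_sqrt(x, iterations=6):
    """Approximate 1/sqrt(x) with Newton-Raphson on f(y) = 1/y**2 - x.

    Each step y <- y * (1.5 - 0.5 * x * y * y) roughly doubles the
    number of correct bits once the seed is close enough.
    """
    m, e = math.frexp(x)          # x = m * 2**e, with m in [0.5, 1)
    y = math.ldexp(1.0, -e // 2)  # crude seed: 2**(-e/2) ~ 1/sqrt(x)
    for _ in range(iterations):
        y = y * (1.5 - 0.5 * x * y * y)
    return y
```

The iteration is multiply-add only (no division), which is why it is attractive for VLSI implementations and a natural baseline against which to compare Harmonized Parabolic Synthesis.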