Essays about: "parallel distributed algorithm"

Showing results 1–5 of 19 essays containing the words "parallel distributed algorithm".

  1. Using MPI One-Sided Communication for Parallel Sudoku Solving

    University essay from Umeå universitet/Institutionen för datavetenskap

    Author : Henrik Aili; [2023]
    Keywords : exact cover; sudoku; parallelization; MPI;

    Abstract : This thesis investigates the scalability of parallel Sudoku solving using Donald Knuth’s Dancing Links and Algorithm X with two different MPI communication methods: MPI One-Sided Communication and MPI Send-Receive. The study compares the performance of the two communication approaches and finds that MPI One-Sided Communication exhibits better scalability in terms of speedup and efficiency.
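
    The entry above compares MPI one-sided communication (RMA) with classic send/receive. As a purely illustrative sketch of the one-sided style, the hypothetical mpi4py snippet below lets one worker raise a shared "solution found" flag on rank 0 without rank 0 posting a matching receive; the flag, the worker roles, and the run command are assumptions for the example, not details taken from the thesis.

```python
# Minimal sketch of MPI one-sided (RMA) communication with mpi4py.
# Run with: mpiexec -n 4 python rma_demo.py
# The "solved" flag is a hypothetical stand-in for shared solver state
# (e.g. "a Sudoku solution has been found"); it is not the thesis's code.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Rank 0 exposes a one-element flag; every rank opens a window on the buffer.
flag = np.zeros(1, dtype='i') if rank == 0 else np.empty(0, dtype='i')
win = MPI.Win.Create(flag, comm=comm)

if rank == 1:
    # Pretend this worker found a solution: write the flag on rank 0
    # directly, without rank 0 posting a matching receive.
    one = np.ones(1, dtype='i')
    win.Lock(0, MPI.LOCK_EXCLUSIVE)
    win.Put(one, 0)
    win.Unlock(0)

comm.Barrier()  # ensure the update has completed before others read it

if rank != 0:
    local = np.zeros(1, dtype='i')
    win.Lock(0, MPI.LOCK_SHARED)
    win.Get(local, 0)
    win.Unlock(0)
    print(f"rank {rank} sees flag = {local[0]}")

win.Free()
```

    In the send/receive style, rank 0 would have to participate in every exchange; with RMA the target stays passive, which is the property the one-sided approach trades on.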

  2. Decentralized Learning over Wireless Networks with Imperfect and Constrained Communication : To broadcast, or not to broadcast, that is the question!

    University essay from Linköpings universitet/Kommunikationssystem

    Author : Martin Dahl; [2023]
    Keywords : Decentralized Stochastic Gradient Descent; Decentralized Learning; Medium Access Control; Wireless Communications; Machine Learning; Imperfect Communication; Resource-Constrained; Resource Allocation; Scheduling;

    Abstract : The ever-expanding volume of data generated by network devices such as smartphones, personal computers, and sensors has significantly contributed to the remarkable advancements in artificial intelligence (AI) and machine learning (ML) algorithms. However, effectively processing and learning from this extensive data usually requires substantial computational capabilities centralized in a server.
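
    For readers unfamiliar with decentralized learning, the sketch below shows the core update this line of work builds on: each node takes a gradient step on its own data and then averages ("gossips") its parameters with its neighbours through a mixing matrix. It is a toy numpy example on a ring of four nodes with a synthetic least-squares objective; the imperfect, resource-constrained wireless links and the broadcast scheduling studied in the thesis are not modelled.

```python
# Toy decentralized SGD (D-SGD): local gradient step, then neighbour averaging
# via a doubly-stochastic mixing matrix W over a ring topology.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim = 4, 3

# Ring topology: each node averages itself with its two neighbours.
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 1 / 3
    W[i, (i - 1) % n_nodes] = 1 / 3
    W[i, (i + 1) % n_nodes] = 1 / 3

# Each node holds its own local least-squares data (synthetic).
A = rng.normal(size=(n_nodes, 10, dim))
b = rng.normal(size=(n_nodes, 10))
x = np.zeros((n_nodes, dim))          # one local model per node
lr = 0.05

for step in range(100):
    # 1) local gradient step on each node's own data
    grads = np.stack([A[i].T @ (A[i] @ x[i] - b[i]) / len(b[i])
                      for i in range(n_nodes)])
    x = x - lr * grads
    # 2) gossip step: x_i <- sum_j W_ij * x_j (one communication round)
    x = W @ x

print("disagreement between nodes:", np.linalg.norm(x - x.mean(axis=0)))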

  3. Real-time remote processing enabled by high speed Ethernet

    University essay from Lunds universitet/Institutionen för elektro- och informationsteknik

    Author : Dumitra Iancu; Lina Tinnerberg; [2023]
    Keywords : Large Intelligent Surfaces; Deep Neural Networks; FPGA; VHDL; Ethernet; Technology and Engineering;

    Abstract : A growing trend within the technologies enabling 6G, such as Massive MIMO and Large Intelligent Surfaces, is to benefit from both the communication and the positioning capabilities they can provide. Because such systems employ large numbers of antenna arrays that produce large amounts of data, this work explores a distributed hardware approach with near-antenna processing.

  4. Performance Analysis of Distributed Spatial Interpolation for Air Quality Data

    University essay from KTH/Skolan för elektroteknik och datavetenskap (EECS)

    Author : Albert Asratyan; [2021]
    Keywords : Distributed Computing; Parallel Execution; Data Interpolation; Kriging; Apache Ray; Geostatistics; Python; Cloud Services; AWS; Air Quality;

    Abstract : Deteriorating air quality is a growing concern that has been linked to many health-related issues. Monitoring it is a good first step towards understanding the problem. However, it is not always possible to collect air quality data from every location.
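
    The thesis parallelizes spatial interpolation with Apache Ray. The sketch below shows only the parallelization pattern: split the prediction grid into chunks and interpolate each chunk as a Ray task. Inverse-distance weighting stands in for Kriging, and the stations and grid are synthetic; none of this is the thesis's actual code.

```python
# Fan-out pattern for distributed spatial interpolation with Apache Ray:
# each grid chunk is interpolated in its own remote task.
import numpy as np
import ray

ray.init()

rng = np.random.default_rng(1)
stations = rng.uniform(0, 100, size=(50, 2))   # sensor coordinates (synthetic)
values = rng.uniform(0, 60, size=50)           # e.g. PM2.5 readings (synthetic)

@ray.remote
def interpolate_chunk(points, stations, values, power=2.0):
    """Inverse-distance-weighted interpolation at a chunk of grid points."""
    d = np.linalg.norm(points[:, None, :] - stations[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return (w @ values) / w.sum(axis=1)

# Build a 100x100 prediction grid and split it into 16 chunks.
gx, gy = np.meshgrid(np.linspace(0, 100, 100), np.linspace(0, 100, 100))
grid = np.column_stack([gx.ravel(), gy.ravel()])
chunks = np.array_split(grid, 16)

# Put the shared station data into the object store once, then fan out.
stations_ref, values_ref = ray.put(stations), ray.put(values)
futures = [interpolate_chunk.remote(c, stations_ref, values_ref) for c in chunks]
result = np.concatenate(ray.get(futures)).reshape(gx.shape)
print(result.shape)
```

    Passing the station data through ray.put avoids re-serializing it for every task, and the same fan-out runs unchanged on a multi-node Ray cluster.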

  5. A scalable species-based genetic algorithm for reinforcement learning

    University essay from KTH/Skolan för elektroteknik och datavetenskap (EECS)

    Author : Anirudh Seth; [2021]
    Keywords : neuroevolution; model encoding; distributed speciation; reinforcement learning; genetic algorithms; evolutionary computing;

    Abstract : Existing methods in Reinforcement Learning (RL) that rely on gradient estimates suffer from slow convergence, poor sample efficiency, and computationally expensive training, especially when dealing with complex real-world problems with high-dimensional state and action spaces. In this work, we attempt to leverage the benefits of evolutionary computation as a competitive, scalable, and gradient-free alternative to training deep neural networks for RL-specific problems.
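
    As a minimal illustration of the gradient-free alternative the abstract refers to, the sketch below runs a plain genetic algorithm over policy parameter vectors: evaluate fitness, keep an elite, refill by Gaussian mutation. The speciation and distributed execution that the thesis is actually about are omitted, and the fitness function is a synthetic stand-in for episode return in an RL environment.

```python
# Plain genetic algorithm for gradient-free policy search:
# evaluate a population, keep the elite, refill by Gaussian mutation.
import numpy as np

rng = np.random.default_rng(42)
pop_size, n_params, n_elite, sigma = 64, 20, 8, 0.1
target = rng.normal(size=n_params)           # hypothetical optimum

def fitness(theta):
    # Stand-in for "average episode return of the policy with parameters theta".
    return -np.sum((theta - target) ** 2)

population = rng.normal(size=(pop_size, n_params))
for generation in range(200):
    scores = np.array([fitness(ind) for ind in population])
    elite = population[np.argsort(scores)[-n_elite:]]   # best individuals
    # Refill the population by mutating randomly chosen elites.
    parents = elite[rng.integers(0, n_elite, size=pop_size)]
    population = parents + sigma * rng.normal(size=(pop_size, n_params))
    population[:n_elite] = elite                         # elitism

print("best fitness:", fitness(elite[-1]))
```

    Because each fitness evaluation is independent, the inner loop parallelizes trivially across workers, which is what makes this family of methods attractive at scale.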