Essays about: "stochastic gradient descent"

Showing results 1-5 of 28 essays containing the words "stochastic gradient descent".

  1. Variational AutoEncoders and Differential Privacy : balancing data synthesis and privacy constraints

    University essay from KTH/Skolan för elektroteknik och datavetenskap (EECS)

    Author : Baptiste Bremond; [2024]
    Keywords : TVAE; Differential privacy; Tabular data; Synthetic data; DP-SGD;

    Abstract : This thesis investigates the effectiveness of Tabular Variational Autoencoders (TVAEs) in generating high-quality synthetic tabular data and assesses their compliance with differential privacy principles. The study shows that while TVAEs are better than VAEs at generating synthetic data that faithfully reproduces the distribution of real data as measured by the Synthetic Data Vault (SDV) metrics, this fidelity does not guarantee that the synthetic data is fit for practical industrial applications.

  2. Stochastic Frank-Wolfe Algorithm : Uniform Sampling Without Replacement

    University essay from Umeå universitet/Institutionen för matematik och matematisk statistik

    Author : Olof Håkman; [2023]
    Keywords : Stochastic Frank-Wolfe; Stochastic optimization; Sampling without replacement;

    Abstract : The Frank-Wolfe (FW) optimization algorithm, thanks to its projection-free updates, has gained popularity in recent years, with typical applications in machine learning. In the stochastic setting, it remains relatively understudied compared to the more expensive projected method of Stochastic Gradient Descent (SGD).
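
    The abstract contrasts Frank-Wolfe's projection-free update with projected SGD. As a rough illustration only (not the algorithm developed in the thesis), the sketch below shows one stochastic Frank-Wolfe step for least squares over an l1-ball, with the minibatch drawn uniformly without replacement; all names and parameters are illustrative assumptions.

        # Illustrative stochastic Frank-Wolfe step (assumed setup: least squares over an l1-ball of radius tau).
        import numpy as np

        def sfw_step(x, A, b, tau, batch_size, step, rng):
            n = A.shape[0]
            idx = rng.choice(n, size=batch_size, replace=False)  # minibatch sampled uniformly without replacement
            Ab, bb = A[idx], b[idx]
            grad = Ab.T @ (Ab @ x - bb) / batch_size             # stochastic gradient estimate
            i = np.argmax(np.abs(grad))                          # linear minimization oracle:
            s = np.zeros_like(x)                                 # the l1-ball vertex minimizing <grad, s>
            s[i] = -tau * np.sign(grad[i])
            return x + step * (s - x)                            # convex combination update, no projection needed

        rng = np.random.default_rng(0)
        A = rng.standard_normal((200, 10)); b = A @ rng.standard_normal(10)
        x = np.zeros(10)
        for t in range(100):
            x = sfw_step(x, A, b, tau=5.0, batch_size=32, step=2.0 / (t + 2), rng=rng)

    The only constraint-handling work per iteration is the linear minimization oracle, which for the l1-ball reduces to picking the largest gradient coordinate; this is what makes the method projection-free.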

  3. Characterization and Stabilization of Transverse Spatial Modes of Light in Few-Mode Optical Fibers

    University essay from Linköpings universitet/Informationskodning

    Author : Oscar Pihl; [2023]
    Keywords : spatial modes; Space division multiplexing; SDM; superpositions; LP-modes; few-mode fibers; quantum communication; quantum optics; adaptive optics; stochastic parallel gradient descent; SPGD; mode control; QKD; polarization controller; paddle controller; QRNG; quantum random number generator; perturbation effects;

    Abstract : With the growing need for secure and high-capacity communications, innovative solutions are needed to meet the demands of tomorrow. One such innovation is to make use of the still unutilized spatial dimension of light in communications, which has promising applications both in enabling higher data traffic and in the future security protocols of quantum communications.

  4. Decentralized Learning over Wireless Networks with Imperfect and Constrained Communication : To broadcast, or not to broadcast, that is the question!

    University essay from Linköpings universitet/Kommunikationssystem

    Author : Martin Dahl; [2023]
    Keywords : Decentralized Stochastic Gradient Descent; Decentralized Learning; Medium Access Control; Wireless Communications; Machine Learning; Imperfect Communication; Resource-Constrained; Resource Allocation; Scheduling;

    Abstract : The ever-expanding volume of data generated by network devices such as smartphones, personal computers, and sensors has significantly contributed to the remarkable advancements in artificial intelligence (AI) and machine learning (ML) algorithms. However, effectively processing and learning from this extensive data usually requires substantial computational capabilities centralized in a server.

  5. On the Modelling of Stochastic Gradient Descent with Stochastic Differential Equations

    University essay from Uppsala universitet/Analys och partiella differentialekvationer

    Author : Martin Leino; [2023]
    Keywords : stochastic gradient descent; stochastic differential equations; statistical machine learning;

    Abstract : Stochastic gradient descent (SGD) is arguably the most important algorithm used in optimization problems for large-scale machine learning. Its behaviour has been studied extensively from the viewpoint of mathematical analysis and probability theory; it is widely held that, in the limit where the learning rate tends to zero, a specific stochastic differential equation becomes an adequate model of the algorithm's dynamics.
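
    For context, a commonly cited form of this small-learning-rate approximation from the SGD-as-SDE literature (not necessarily the exact equation analysed in this thesis) models the iterates as

        d\theta_t = -\nabla L(\theta_t)\, dt + \sqrt{\eta}\, \Sigma(\theta_t)^{1/2}\, dW_t

    where \eta is the learning rate, L the loss, \Sigma the covariance of the minibatch gradient noise, and W a standard Brownian motion; as \eta tends to zero the noise term vanishes and the dynamics reduce to gradient flow.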