Essays about: "L2 Regularization"
Showing results 1-5 of 12 essays containing the words "L2 Regularization".
-
1. Elastic Net Regression for Prosthesis Control in Short Residual Limb Amputees: Performance and Generalizability
University essay from Lunds universitet/Avdelningen för Biomedicinsk teknik. Abstract: This Master's thesis in Biomedical Engineering investigates the performance and generalizability of linear regression models in the context of prosthesis control for short residual limb amputees. The thesis uses intramuscular electromyography data and employs a regression technique called Elastic Net Regression - a technique that combines L1 and L2 regularization - to predict 1-DOF isometric force outputs from the fingers and the wrist. READ MORE
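The elastic net mentioned above blends an L1 penalty (which drives some coefficients to exactly zero) with an L2 penalty (which shrinks all coefficients). A minimal sketch using scikit-learn's `ElasticNet` on synthetic data - not the thesis's actual EMG pipeline; the channel count and penalty values are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

# Synthetic stand-in for EMG features and a 1-DOF force target
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                   # 200 samples, 16 hypothetical channels
w_true = np.zeros(16)
w_true[:4] = [1.5, -2.0, 0.8, 0.3]               # only a few channels carry signal
y = X @ w_true + 0.1 * rng.normal(size=200)

# l1_ratio blends the penalties: 1.0 = pure lasso (L1), 0.0 = pure ridge (L2)
model = ElasticNet(alpha=0.1, l1_ratio=0.5)
model.fit(X, y)
print(model.coef_)                               # sparse-ish, shrunken coefficients
```

The L1 component tends to zero out uninformative channels while the L2 component keeps the fit stable when channels are correlated, which is the usual motivation for choosing elastic net over either penalty alone.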
-
2. Investigation of Facial Age Estimation using Deep Learning
University essay from Uppsala universitet/Institutionen för informationsteknologi. Abstract: Age estimation from facial images has drawn increasing attention in the past few years. This thesis project performs age group classification of facial images acquired in in-the-wild conditions using deep convolutional neural network techniques. READ MORE
-
3. Investigating Relations between Regularization and Weight Initialization in Artificial Neural Networks
University essay from Lunds universitet/Beräkningsbiologi och biologisk fysik - Genomgår omorganisation. Abstract: L2 regularization is a common method used to prevent overtraining in artificial neural networks. However, an issue with this method is that the regularization strength has to be properly adjusted for it to work as intended. This value is usually found by trial and error, which can take considerable time, especially for larger networks. READ MORE
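In gradient descent, the L2 penalty λ‖w‖² discussed in this abstract simply adds a term 2λw to the weight gradient, so each update shrinks the weights toward zero ("weight decay"). A minimal NumPy sketch on a synthetic binary task - the regularization strength, learning rate, and data here are illustrative assumptions, not values from the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(float)        # simple synthetic binary target

w = rng.normal(size=5)
b = 0.0
lam = 0.01    # L2 strength; in practice found by trial and error, as the abstract notes
lr = 0.1

for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))       # sigmoid output
    grad_w = X.T @ (p - y) / len(y) + 2 * lam * w  # data gradient + L2 term
    grad_b = np.mean(p - y)
    w -= lr * grad_w                             # the 2*lam*w term decays w each step
    b -= lr * grad_b
```

Without the `2 * lam * w` term the weights on a separable task would grow without bound; with it they settle where the data gradient balances the decay, which is the overtraining-prevention effect the abstract refers to.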
-
4. Optimizing L2-regularization for Binary Classification Tasks
University essay from Lunds universitet/Beräkningsbiologi och biologisk fysik - Genomgår omorganisation. Abstract: An Artificial Neural Network (ANN) is a type of machine learning algorithm with widespread usage. When training an ANN, there is a risk that it becomes overtrained and fails to generalize to new data. Methods to prevent this, such as L2-regularization, introduce hyperparameters that are time-consuming to optimize. READ MORE
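The time-consuming optimization this abstract mentions is typically a search over candidate L2 strengths scored on held-out data. A minimal sketch with scikit-learn's `MLPClassifier`, whose `alpha` parameter is the L2 term - the candidate grid and network size are illustrative assumptions, not the thesis's setup:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic binary classification task
X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

# Trial-and-error search over L2 strengths (sklearn's `alpha` is the L2 penalty)
best_alpha, best_acc = None, -1.0
for alpha in [1e-4, 1e-2, 1.0]:
    clf = MLPClassifier(hidden_layer_sizes=(16,), alpha=alpha,
                        max_iter=500, random_state=0)
    clf.fit(X_tr, y_tr)
    acc = clf.score(X_val, y_val)
    if acc > best_acc:
        best_alpha, best_acc = alpha, acc
print(best_alpha, best_acc)
```

Each candidate requires a full training run, which is why grid searches like this scale poorly and motivate the cheaper prediction schemes studied in theses like entries 4 and 5 here.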
-
5. Prediction of appropriate L2 regularization strengths through Bayesian formalism
University essay from Lunds universitet/Beräkningsbiologi och biologisk fysik - Genomgår omorganisation; Lunds universitet/Institutionen för astronomi och teoretisk fysik - Genomgår omorganisationAbstract : This paper proposes and investigates a Bayesian relation between optimal L2 regularization strengths and the number of training patterns and hidden nodes used for an artificial neural network. The results support the proposed dependence for number of training patterns, while the dependence on hidden architecture was less clear. READ MORE