Essays about: "Regularisering" (regularisation)

Showing results 6-10 of 30 essays containing the word Regularisering.

  6. Clustering of Unevenly Spaced Mixed Data Time Series

    University essay from KTH/Mathematical Statistics

    Author : Pierre Sinander; Asik Ahmed; [2023]
    Keywords : mixed data time series; unevenly spaced time series; clustering; dynamic time warping; Gower dissimilarity; time warping regularisation; numerical and categorical time series; cluster analysis;

    Abstract : This thesis explores the feasibility of clustering mixed data and unevenly spaced time series for customer segmentation. The proposed method implements the Gower dissimilarity as the local distance function in dynamic time warping to calculate dissimilarities between mixed data time series.
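
    A minimal sketch of the idea described above, assuming each observation mixes one numerical and one categorical feature: the Gower dissimilarity is used as the local distance inside a classic dynamic time warping recursion. The feature layout, equal weighting and the `num_range` normalisation are illustrative assumptions, not details taken from the thesis.

    ```python
    import numpy as np

    def gower_distance(x, y, num_range):
        """Gower dissimilarity between two mixed observations.

        x, y: (numeric_value, categorical_value) tuples -- an assumed layout.
        num_range: observed range of the numeric feature, used for scaling.
        """
        num_part = abs(x[0] - y[0]) / num_range   # numeric part: range-scaled difference
        cat_part = 0.0 if x[1] == y[1] else 1.0   # categorical part: simple matching
        return (num_part + cat_part) / 2.0        # equal weights, averaged

    def dtw_gower(series_a, series_b, num_range):
        """Dynamic time warping cost with the Gower dissimilarity as local distance."""
        n, m = len(series_a), len(series_b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = gower_distance(series_a[i - 1], series_b[j - 1], num_range)
                cost[i, j] = d + min(cost[i - 1, j],      # step in series_a only
                                     cost[i, j - 1],      # step in series_b only
                                     cost[i - 1, j - 1])  # step in both
        return cost[n, m]

    # toy usage: two short mixed-data series of (value, category) pairs
    a = [(1.0, "A"), (2.0, "A"), (3.0, "B")]
    b = [(1.5, "A"), (3.0, "B")]
    print(dtw_gower(a, b, num_range=5.0))
    ```

    The resulting pairwise dissimilarities can then be fed to any standard clustering routine; the time warping regularisation mentioned in the keywords would constrain or penalise the warping path and is not shown here.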

  7. Machine Learning to predict student performance based on well-being data : a technical and ethical discussion

    University essay from KTH/School of Electrical Engineering and Computer Science (EECS)

    Author : Lucy McCarren; [2023]
    Keywords : Machine Learning; Data Science; Learning Analytics;

    Abstract : The data provided by educational platforms and digital tools offers new ways of analysing students’ learning strategies. One such digital tool is the well-being platform created by EdAider, which consists of an interface where students can answer questions about their well-being, and a dashboard where teachers and schools can see insights into the well-being of individual students and groups of students.

  8. Modelling Risk in Real-Life Multi-Asset Portfolios

    University essay from KTH/Mathematics (Dept.)

    Author : Karin Hahn; Axel Backlund; [2023]
    Keywords : Risk modelling; multi-asset portfolios; risk factor models; time series analysis; regression; financial portfolios;

    Abstract : We develop a risk factor model based on data from a large number of portfolios spanning multiple asset classes. The risk factors are selected based on economic theory through an analysis of the asset holdings, as well as statistical tests.
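
    A minimal sketch of a linear risk factor model of the kind the abstract describes, fitted per portfolio by ordinary least squares; the factors, the data and the coefficient values are synthetic placeholders, not the thesis' actual factor set or portfolios.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # synthetic placeholder data: T daily returns for K candidate risk factors
    T, K = 250, 3
    factor_returns = rng.normal(0.0, 0.01, size=(T, K))
    true_beta = np.array([0.8, -0.3, 0.1])
    portfolio_returns = factor_returns @ true_beta + rng.normal(0.0, 0.002, size=T)

    # linear factor model r_p = alpha + beta' f + eps, estimated by OLS
    X = np.column_stack([np.ones(T), factor_returns])
    coef, *_ = np.linalg.lstsq(X, portfolio_returns, rcond=None)
    alpha, beta = coef[0], coef[1:]

    # systematic portfolio variance implied by the factors: beta' Sigma_f beta
    factor_cov = np.cov(factor_returns, rowvar=False)
    systematic_var = beta @ factor_cov @ beta
    print(alpha, beta, systematic_var)
    ```

    In practice, the statistical tests mentioned in the abstract (for example, tests of coefficient significance) would decide which candidate factors are retained for each asset class.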

  9. Investigating Relations between Regularization and Weight Initialization in Artificial Neural Networks

    University essay from Lund University/Computational Biology and Biological Physics (undergoing reorganisation)

    Author : Rasmus Sjöö; [2022]
    Keywords : Artificial Neural Networks; L1 Regularization; L2 Regularization; Loss Function; Maximum Likelihood; Regularization Strength; Synthetic Data Generation; Weight Initialization; Physics and Astronomy;

    Abstract : L2 regularization is a common method used to prevent overtraining in artificial neural networks. However, an issue with this method is that the regularization strength has to be properly adjusted for it to work as intended. This value is usually found by trial and error, which can take some time, especially for larger networks.
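
    A minimal sketch of where the regularization strength enters an L2-penalised loss, assuming a plain mean-squared-error objective on a linear model; the variable `lam` plays the role of the strength that, per the abstract, is usually tuned by trial and error.

    ```python
    import numpy as np

    def l2_regularised_loss(weights, X, y, lam):
        """Mean squared error plus an L2 penalty lam * ||w||^2 on the weights."""
        residuals = X @ weights - y
        mse = np.mean(residuals ** 2)
        penalty = lam * np.sum(weights ** 2)   # lam is the regularization strength
        return mse + penalty

    # toy data; sweeping lam is the trial-and-error tuning the abstract refers to
    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 5))
    y = X @ np.array([1.0, 0.0, -2.0, 0.5, 0.0]) + rng.normal(0.0, 0.1, size=100)
    w = rng.normal(size=5)
    for lam in (0.0, 0.01, 0.1, 1.0):
        print(lam, l2_regularised_loss(w, X, y, lam))
    ```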

  10. Towards topology-aware Variational Auto-Encoders : from InvMap-VAE to Witness Simplicial VAE

    University essay from KTH/School of Electrical Engineering and Computer Science (EECS)

    Author : Aniss Aiman Medbouhi; [2022]
    Keywords : Variational Auto-Encoder; Nonlinear dimensionality reduction; Generative model; Inverse projection; Computational topology; Algorithmic topology; Topological Data Analysis; Data visualisation; Unsupervised representation learning; Topological machine learning; Betti number; Simplicial complex; Witness complex; Simplicial map; Simplicial regularization;

    Abstract : Variational Auto-Encoders (VAEs) are one of the most famous deep generative models. After showing that standard VAEs may not preserve the topology, that is, the shape of the data, between the input and the latent space, we tried to modify them so that the topology is preserved.
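
    For context, a minimal sketch of the standard VAE objective (the negative evidence lower bound) that such topology-aware variants start from, assuming a Gaussian encoder and an N(0, I) prior; the InvMap-VAE and Witness Simplicial VAE modifications themselves are not reproduced here.

    ```python
    import numpy as np

    def negative_elbo(x, x_recon, mu, log_var):
        """Standard VAE loss: reconstruction error plus the closed-form KL term
        between the Gaussian encoder q(z|x) = N(mu, diag(exp(log_var))) and N(0, I)."""
        recon = np.sum((x - x_recon) ** 2)                            # reconstruction term
        kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)  # KL(q(z|x) || N(0, I))
        return recon + kl

    # toy usage with made-up encoder outputs
    x, x_recon = np.array([0.2, 0.8]), np.array([0.25, 0.7])
    mu, log_var = np.array([0.1, -0.3]), np.array([-1.0, -0.5])
    print(negative_elbo(x, x_recon, mu, log_var))

    # A topology-aware variant, in the spirit of the thesis, would add an extra
    # regularisation term (e.g. the simplicial regularization named in the keywords)
    # on top of this objective; its exact form is not shown here.
    ```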