Essays about: "embedding models"

Showing results 16 - 20 of 104 essays containing the words "embedding models".

  16. Distilling Multilingual Transformer Models for Efficient Document Retrieval : Distilling multi-Transformer models with distillation losses involving multi-Transformer interactions

    University essay from KTH/Skolan för elektroteknik och datavetenskap (EECS)

    Author : Xuecong Liu; [2022]
    Keywords : Dense Passage Retrieval; Knowledge Distillation; Multilingual Transformer; Document Retrieval; Open Domain Question Answering; Tät textavsnittssökning; kunskapsdestillering; flerspråkiga transformatorer; dokumentsökning; domänlöst frågebesvarande;

    Abstract : Open Domain Question Answering (OpenQA) is the task of automatically finding answers to a query in a given set of documents. Language-agnostic OpenQA, where the answers may be in a different language from the question, is an increasingly important research area in a globalised world.
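
    As a rough, hypothetical sketch of the distillation idea this abstract refers to (not the thesis's actual losses or models), a student retriever can be trained to match a teacher's query-passage relevance scores with a temperature-scaled KL divergence; the tensors below are random placeholders standing in for bi-encoder scores.

      import torch
      import torch.nn.functional as F

      def distillation_loss(student_scores, teacher_scores, temperature=2.0):
          # KL divergence between the teacher's and the student's softened
          # query-passage score distributions; both have shape (batch, num_passages).
          t = temperature
          student_log_probs = F.log_softmax(student_scores / t, dim=-1)
          teacher_probs = F.softmax(teacher_scores / t, dim=-1)
          # Scale by t**2 so gradient magnitudes stay comparable across temperatures.
          return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * t * t

      # Toy usage: random scores stand in for query-passage dot products.
      student_scores = torch.randn(4, 8, requires_grad=True)  # small student model
      teacher_scores = torch.randn(4, 8)                       # large multilingual teacher
      loss = distillation_loss(student_scores, teacher_scores)
      loss.backward()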

  17. Integrating Fire Evacuation into the Building Information Modelling Workflow

    University essay from Lunds universitet/Avdelningen för Brandteknik

    Author : Nazim Yakhou; [2022]
    Keywords : Fire Safety Engineering; Building Information Modelling; Fire Evacuation; Performance Based Design; Golden Thread of Information; Technology and Engineering;

    Abstract : Building Information Modelling (BIM) is gradually emerging as a useful methodology in the Architecture, Engineering and Construction (AEC) field. One of the many benefits of BIM is coordination between stakeholders from multiple disciplines. However, the field of Fire Safety Engineering (FSE) lags behind because of its lack of integration into this digital workflow.

  18. On the use of knowledge graph embeddings for business expansion

    University essay from KTH/Skolan för elektroteknik och datavetenskap (EECS)

    Author : Niklas Rydberg; [2022]
    Keywords : Knowledge Graph Embeddings; Knowledge Graphs; Link Prediction; Machine Learning; Artificial Intelligence; Kunskapsgrafinbäddningar; Kunskapsgrafer; Länkförutsägelser; Maskininlärning; Artificiell Intelligens;

    Abstract : The area of Knowledge Graphs has grown significantly in recent years and has found many applications in both industrial and academic settings. Despite this, many large Knowledge Graphs are in fact incomplete, which leads to the problem of finding the missing facts in the graphs using Link Prediction.
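
    For illustration only, a minimal TransE-style scoring function shows how knowledge graph embeddings support link prediction: candidate facts are ranked by how closely head + relation approximates tail. The entities, relation, and random embeddings below are made-up placeholders, not the models or data used in the thesis.

      import numpy as np

      rng = np.random.default_rng(0)
      dim = 16

      # Toy embeddings for a hypothetical graph; in practice these are learned.
      entities = {name: rng.normal(size=dim) for name in ["company_a", "city_x", "city_y"]}
      relations = {"operates_in": rng.normal(size=dim)}

      def transe_score(head, relation, tail):
          # TransE plausibility score: -||h + r - t||. Higher means more plausible.
          return -np.linalg.norm(entities[head] + relations[relation] - entities[tail])

      # Link prediction: rank candidate tails for the query (company_a, operates_in, ?).
      candidates = ["city_x", "city_y"]
      ranked = sorted(candidates,
                      key=lambda t: transe_score("company_a", "operates_in", t),
                      reverse=True)
      print(ranked)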

  19. Structural Comparison of Data Representations Obtained from Deep Learning Models

    University essay from KTH/Skolan för elektroteknik och datavetenskap (EECS)

    Author : Tommy Wallin; [2022]
    Keywords : Representation Learning; Deep learning models; Image Representations; Representationsinlärning; Djupinlärningsmodeller; Bildrepresentationer;

    Abstract : In representation learning, we are interested in how data is represented by different models. Representations from different models are often compared by training a new model on a downstream task using each representation and testing its performance.
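
    The downstream-task comparison mentioned in this abstract can be sketched as a linear probe trained on frozen representations, where higher probe accuracy suggests a more linearly separable representation. The feature matrices below are random stand-ins for features extracted from two models, and this is only a generic baseline, not the structural comparison developed in the thesis.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)

      # Placeholder feature matrices standing in for representations extracted
      # from two different pretrained models on the same 1,000 inputs.
      reps_model_a = rng.normal(size=(1000, 128))
      reps_model_b = rng.normal(size=(1000, 256))
      labels = rng.integers(0, 10, size=1000)

      def probe_accuracy(features, labels):
          # Train a linear probe on frozen representations and report test accuracy.
          X_tr, X_te, y_tr, y_te = train_test_split(features, labels, random_state=0)
          clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
          return clf.score(X_te, y_te)

      print("model A probe accuracy:", probe_accuracy(reps_model_a, labels))
      print("model B probe accuracy:", probe_accuracy(reps_model_b, labels))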

  20. Towards topology-aware Variational Auto-Encoders : from InvMap-VAE to Witness Simplicial VAE

    University essay from KTH/Skolan för elektroteknik och datavetenskap (EECS)

    Author : Aniss Aiman Medbouhi; [2022]
    Keywords : Variational Auto-Encoder; Nonlinear dimensionality reduction; Generative model; Inverse projection; Computational topology; Algorithmic topology; Topological Data Analysis; Data visualisation; Unsupervised representation learning; Topological machine learning; Betti number; Simplicial complex; Witness complex; Simplicial map; Simplicial regularization; Variations autokodare; Ickelinjär dimensionalitetsreducering; Generativ modell; Invers projektion; Beräkningstopologi; Algoritmisk topologi; Topologisk Data Analys; Datavisualisering; Oövervakat representationsinlärning; Topologisk maskininlärning; Betti-nummer; Simplicielt komplex; Vittneskomplex; Simpliciel avbildning; Simpliciel regularisering;

    Abstract : Variational Auto-Encoders (VAEs) are among the best-known deep generative models. After showing that standard VAEs may not preserve the topology, that is, the shape of the data, between the input space and the latent space, we tried to modify them so that the topology is preserved.
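
    As a generic starting point for the models discussed here (not the InvMap-VAE or Witness Simplicial VAE proposed in the thesis), a standard VAE is trained with a reconstruction term plus a KL term on a reparameterised latent sample; the layer sizes and data below are arbitrary placeholders.

      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      class TinyVAE(nn.Module):
          # Minimal fully connected VAE with a 2-D latent space.
          def __init__(self, in_dim=784, hidden=128, latent=2):
              super().__init__()
              self.enc = nn.Linear(in_dim, hidden)
              self.mu = nn.Linear(hidden, latent)
              self.logvar = nn.Linear(hidden, latent)
              self.dec = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                       nn.Linear(hidden, in_dim))

          def forward(self, x):
              h = F.relu(self.enc(x))
              mu, logvar = self.mu(h), self.logvar(h)
              # Reparameterisation trick: sample z = mu + sigma * eps.
              z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
              return self.dec(z), mu, logvar

      def elbo_loss(x, x_hat, mu, logvar):
          # Negative ELBO: reconstruction error plus KL(q(z|x) || N(0, I)).
          recon = F.mse_loss(x_hat, x, reduction="sum")
          kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
          return recon + kl

      x = torch.rand(8, 784)  # a toy batch of flattened inputs
      model = TinyVAE()
      x_hat, mu, logvar = model(x)
      loss = elbo_loss(x, x_hat, mu, logvar)
      loss.backward()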