Essays about: "multimodal fusion"
Showing results 6–10 of 17 essays containing the words "multimodal fusion".
-
6. Land Use/Land Cover Classification From Satellite Remote Sensing Images Over Urban Areas in Sweden : An Investigative Multiclass, Multimodal and Spectral Transformation, Deep Learning Semantic Image Segmentation Study
University essay from Linköpings universitet/Institutionen för datavetenskap. Abstract: Remote Sensing (RS) technology provides valuable information about Earth by enabling an overview of the planet from above, making it a much-needed resource for many applications. Given the abundance of RS data and continued urbanisation, there is a need for efficient approaches to leverage RS data and its unique characteristics for the assessment and management of urban areas.
-
7. Hierarchical Fusion Approaches for Enhancing Multimodal Emotion Recognition in Dialogue-Based Systems : A Systematic Study of Multimodal Emotion Recognition Fusion Strategy
University essay from KTH/Skolan för elektroteknik och datavetenskap (EECS). Abstract: Multimodal Emotion Recognition (MER) has gained increasing attention due to its exceptional performance. In this thesis, we evaluate feature-level fusion, decision-level fusion, and two proposed hierarchical fusion methods for MER systems using a dialogue-based dataset.
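The two baseline strategies named in this abstract can be sketched in a few lines. This is a minimal illustration, not code from the thesis: feature-level (early) fusion concatenates per-modality feature vectors before a single classifier, while decision-level (late) fusion combines the class probabilities of separate per-modality classifiers. The function names and the weighted-average combination rule are illustrative assumptions.

```python
def feature_level_fusion(audio_feats, text_feats):
    """Early fusion: concatenate per-modality feature vectors
    into one joint representation fed to a single classifier.
    (Illustrative sketch; not the thesis's implementation.)"""
    return audio_feats + text_feats  # list concatenation

def decision_level_fusion(audio_probs, text_probs, w_audio=0.5):
    """Late fusion: combine per-modality class probabilities
    with a weighted average after separate classifiers have run.
    The 50/50 default weighting is an assumption for illustration."""
    w_text = 1.0 - w_audio
    return [w_audio * a + w_text * t
            for a, t in zip(audio_probs, text_probs)]

# Example: two modalities, two emotion classes
joint = feature_level_fusion([0.1, 0.2], [0.3])          # one fused feature vector
fused = decision_level_fusion([0.8, 0.2], [0.6, 0.4])    # averaged class probabilities
```

Hierarchical fusion, as studied in the thesis, sits between these extremes by combining modalities in stages rather than all at once.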
-
8. Multimodal Machine Learning in Human Motion Analysis
University essay from KTH/Skolan för elektroteknik och datavetenskap (EECS). Abstract: Currently, most long-term human motion classification and prediction tasks are driven by spatio-temporal data of the human trunk. In addition, data with multiple modalities can change idiosyncratically with human motion, such as electromyography (EMG) of specific muscles and respiratory rhythm.
-
9. Explainable Multimodal Fusion
University essay from KTH/Skolan för elektroteknik och datavetenskap (EECS). Abstract: Recently, there has been a lot of interest in explainable predictions, with new explainability approaches being created for specific data modalities like images and text. However, there is a dearth of understanding and minimal exploration of explainability in the multimodal machine learning domain, where diverse data modalities are fused together in the model.
-
10. Context-based Multimodal Machine Learning on Game Oriented Data for Affective State Recognition
University essay from KTH/Skolan för elektroteknik och datavetenskap (EECS). Abstract: Affective computing is an essential part of Human-Robot Interaction, where knowing the human's emotional state is crucial to creating an interactive and adaptive social robot. Previous work has mainly focused on using unimodal or multimodal sequential models for Affective State Recognition.