Essays about: "Vision Transformers ViTs"

Showing results 1 - 5 of 6 essays containing the words Vision Transformers ViTs.

  1. Classifying femur fractures using federated learning

    University essay from Linköpings universitet/Statistik och maskininlärning

    Author : Hong Zhang; [2024]
    Keywords : Atypical femur fracture; Federated Learning; Neural Network; Classification;

    Abstract : The rarity and subtle radiographic features of atypical femoral fractures (AFF) make them difficult to distinguish radiologically from normal femoral fractures (NFF). Compared with NFF, AFF has subtle radiological features and is associated with the long-term use of bisphosphonates for the treatment of osteoporosis.

  2. Using Machine Learning to Optimize Near-Earth Object Sighting Data at the Golden Ears Observatory

    University essay from KTH/Skolan för elektroteknik och datavetenskap (EECS)

    Author : Laura Murphy; [2023]
    Keywords : Near-Earth Object Detection; Machine Learning; Deep Learning; Visual Transformers;

    Abstract : This research project focuses on improving Near-Earth Object (NEO) detection using advanced machine learning techniques, particularly Vision Transformers (ViTs). The study addresses challenges such as noise, limited data, and class imbalance.
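    The abstract names class imbalance as one of the challenges. As a rough illustration only, and not the thesis's actual setup, the sketch below fine-tunes a torchvision ViT-B/16 with a class-weighted cross-entropy loss; the two-class NEO-vs-background task, class counts, and hyperparameters are assumptions made for the example.

    ```python
    import torch
    import torch.nn as nn
    from torchvision.models import vit_b_16, ViT_B_16_Weights

    # Hypothetical two-class NEO-vs-background task; the counts are illustrative.
    class_counts = torch.tensor([9500.0, 500.0])               # background, NEO
    class_weights = class_counts.sum() / (2.0 * class_counts)  # up-weight the rare class

    model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)   # ImageNet-pretrained backbone
    model.heads = nn.Linear(model.hidden_dim, 2)               # replace the classification head
    criterion = nn.CrossEntropyLoss(weight=class_weights)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    images = torch.randn(8, 3, 224, 224)                       # dummy batch of 224x224 RGB frames
    labels = torch.randint(0, 2, (8,))
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    ```

    Loss weighting is only one common mitigation; given only the abstract, minority-class oversampling or augmentation would be equally plausible choices.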

  3. Regularizing Vision-Transformers Using Gumbel-Softmax Distributions on Echocardiography Data

    University essay from KTH/Skolan för elektroteknik och datavetenskap (EECS)

    Author : Alfred Nilsson; [2023]
    Keywords : Deep Learning; Vision-Transformers; Echocardiography; Feature Selection; Gumbel-Softmax; Concrete Autoencoders; Regression;

    Abstract : This thesis introduces a novel approach to model regularization in Vision Transformers (ViTs), a category of deep learning models. It employs stochastic embedded feature selection within the context of echocardiography video analysis, specifically focusing on the EchoNet-Dynamic dataset.
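    As a hedged illustration of the mechanism the keywords name, and not the thesis's implementation, the sketch below shows a Concrete (Gumbel-Softmax) selector layer that learns which of d input features to keep, using PyTorch's built-in torch.nn.functional.gumbel_softmax. The class name, dimensions, and temperature are assumptions.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ConcreteFeatureSelector(nn.Module):
        """Selects k of d input features via a Gumbel-Softmax (Concrete) relaxation.

        Hypothetical sketch: one learnable logit vector per selected feature; during
        training each selector draws a relaxed one-hot over the d inputs, keeping the
        selection differentiable and acting as a stochastic regularizer.
        """
        def __init__(self, num_features: int, num_selected: int, tau: float = 1.0):
            super().__init__()
            self.logits = nn.Parameter(torch.zeros(num_selected, num_features))
            self.tau = tau

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, num_features) -> (batch, num_selected)
            if self.training:
                # relaxed one-hot samples, one row per selected feature
                w = F.gumbel_softmax(self.logits, tau=self.tau, hard=False)
            else:
                # deterministic hard selection at evaluation time
                w = F.one_hot(self.logits.argmax(dim=-1), x.shape[-1]).float()
            return x @ w.t()

    # Toy usage: keep 8 of 64 feature dimensions before a downstream regression head.
    selector = ConcreteFeatureSelector(num_features=64, num_selected=8)
    out = selector(torch.randn(4, 64))
    print(out.shape)  # torch.Size([4, 8])
    ```

    In practice the temperature tau is usually annealed toward a hard selection over training; that schedule is omitted here.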

  4. 3D Gaze Estimation on RGB Images using Vision Transformers

    University essay from KTH/Skolan för elektroteknik och datavetenskap (EECS)

    Author : Jing Li; [2023]
    Keywords : 3D Gaze Estimation; Vision Transformers (ViTs); Convolutional Neural Networks (CNNs); Multi-Head Attention; Red-Green-Blue (RGB) Images;

    Abstract : Gaze estimation, a vital component in numerous applications such as human-computer interaction, virtual reality, and driver monitoring systems, is the process of predicting the direction of an individual’s gaze. The predominant methods for gaze estimation can be broadly classified into intrusive and non-intrusive approaches.

  5. 3D Gaze Estimation on Near Infrared Images Using Vision Transformers

    University essay from KTH/Skolan för elektroteknik och datavetenskap (EECS)

    Author : Emil Emir Vardar; [2023]
    Keywords : Gaze estimation; Eye tracking; Vision Transformers (ViTs); Hybrid ViTs; Deep learning; Near-infrared (NIR) images;

    Abstract : Gaze estimation is the process of determining where a person is looking, which has recently become a popular research area due to its broad range of applications. For example, tools that estimate gaze are used for research, medical diagnosis, virtual and augmented reality, driver assistance systems, and many more.