Detection of local motion artifacts and image background in laser speckle contrast imaging

University essay from Linköpings universitet/Institutionen för medicinsk teknik

Abstract: Laser speckle contrast imaging (LSCI) and its extension, multi-exposure laser speckle contrast imaging (MELSCI), are non-invasive techniques for monitoring peripheral blood perfusion. One of the main drawbacks of LSCI and MELSCI in clinical use is that the techniques are sensitive to tissue movement. Moreover, the image background adds unnecessary data. The aim of this project was to develop and evaluate different methods to detect local motion artifacts and image background in LSCI and MELSCI. Three methods were developed: one based on statistical analysis and two based on machine learning. The statistical method was developed in MATLAB with a dataset of 1797 frames of 256 x 320 images, taken from a recording of a hand in which the thumb and middle finger took turns making small movements while the middle finger was subjected to three states induced by an occlusion cuff (baseline, occlusion, and reperfusion). The main filter used in this method was the Hampel filter. The networks for the machine learning methods were developed in Python using the same dataset, but with 20,000 small patches, ranging in size from 3 x 3 to 21 x 21 pixels, extracted from it. The first machine learning method was based on two-dimensional data patches, with no time dimension, while the second used three-dimensional data patches that included the time dimension (from 1 s to 10 s). Ground truth for the dataset was created manually, frame by frame, in a ground truth generation graphical user interface (GUI) in MATLAB. The three methods were assessed using the Dice coefficient. The statistical method achieved a Dice coefficient of 0.7557. For the machine learning method with the 2D dataset, the highest Dice coefficient was 0.2902 (patch size 13 x 13) and the lowest was 0.2372 (patch size 7 x 7).
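The abstract names the Hampel filter as the core of the statistical method but does not show the thesis's MATLAB implementation. As an illustration only, a minimal 1D Hampel-style outlier detector can be sketched in Python with NumPy (the function name, window length, and threshold below are assumptions for the example, not taken from the thesis): a sample is flagged when it deviates from the local median by more than a multiple of the scaled median absolute deviation (MAD).

```python
import numpy as np

def hampel_outliers(signal, window=5, n_sigmas=3.0):
    """Flag samples deviating from the local median by more than
    n_sigmas * 1.4826 * MAD (median absolute deviation).
    Hypothetical helper for illustration, not the thesis's code."""
    signal = np.asarray(signal, dtype=float)
    flags = np.zeros(signal.size, dtype=bool)
    k = 1.4826  # makes MAD a consistent sigma estimate for Gaussian data
    for i in range(signal.size):
        lo = max(0, i - window)
        hi = min(signal.size, i + window + 1)
        local = signal[lo:hi]
        med = np.median(local)
        mad = k * np.median(np.abs(local - med))
        if mad > 0 and abs(signal[i] - med) > n_sigmas * mad:
            flags[i] = True
    return flags

# A flat perfusion-like trace with one movement spike: only the spike is flagged.
trace = np.array([1.0, 1.1, 0.9, 1.0, 8.0, 1.0, 1.1, 0.9, 1.0])
print(hampel_outliers(trace))
```

In the motion-artifact setting, such a filter would presumably be applied along the time axis of each pixel's contrast signal, so that sudden movement-induced deviations stand out against the local temporal median.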
For the machine learning method with 3D datasets, the patch size of 21 x 21 x 4 gave the highest Dice coefficient (0.5173), and the 21 x 21 x 40 model gave the lowest (0.1782). Since the two methods based on temporal data performed best in this project, one conclusion for further development is that temporal data should be used when training an improved model. However, one important difference between the statistical method and the three-dimensional machine learning method is that the statistical method does not handle fast perfusion changes as well as the machine learning method and cannot detect image background and static tissue. Therefore, the overall most useful method to develop further is the three-dimensional machine learning method.
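All three methods were scored with the Dice coefficient, which measures the overlap between a predicted binary mask (e.g. detected motion artifacts) and the ground truth mask. As a minimal sketch (assuming NumPy; the function name is an illustrative choice, not from the thesis):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks.
    Returns 1.0 when both masks are empty (perfect trivial agreement)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Toy 2 x 3 masks: 2 overlapping pixels, 3 positives in each mask.
pred  = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_coefficient(pred, truth))  # 2*2 / (3+3) ≈ 0.667
```

A Dice score of 1 means perfect overlap and 0 means no overlap, which puts the reported values (0.7557 for the statistical method down to 0.1782 for the worst 3D model) on an interpretable scale.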
