Artificial Intelligence applications for railway signalling

University essay from KTH/Transportplanering

Abstract: The main purpose of this Master Thesis is to investigate how front-facing, train-mounted cameras and Computer Vision, a type of Artificial Intelligence (AI), can be used to compensate for GPS inaccuracies. By using footage from track-recording cameras, Computer Vision can determine the number of tracks and the track occupancy of the train, which would compensate for GPS inaccuracies in the lateral positioning. GPS usage in railway applications is rare; however, an AI-based positioning system would facilitate the use of GPS for higher capacity and better utilization of current railway infrastructure. This is especially interesting for ERTMS, a European effort to create a standardized signalling system while simultaneously increasing capacity, where potential for an AI-based positioning system can be found in both ERTMS level 2 and level 3.

Two Computer Vision models were created, based on two different methods. Images for both models were collected from YouTube videos of train trips recorded with train-mounted cameras. In the first model, the images were labelled according to the number of unoccupied adjacent tracks. For example, a left-track occupancy of a double-track section would be labelled “01”. The model architecture was based on Convolutional Neural Networks (CNN), a type of AI algorithm specifically developed for image processing, where every pixel in each image was analysed to find patterns corresponding to each label. In the second model, a Python tool was used to manually label every track with bounding boxes. The purpose of the bounding boxes was to demarcate the tracks within the images. Thus, the second strategy did not require separate labels for the number of tracks and their positions, as both are implied by the bounding boxes. However, it was orders of magnitude more time-consuming. The model was trained using YOLOv3 real-time object detection, a system well suited for real-time track detection.
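The first model's labelling scheme can be sketched as follows. This is a minimal illustration, not code from the thesis: the helper name is hypothetical, and it assumes the two digits count the unoccupied adjacent tracks to the left and right of the occupied track, which is consistent with the "01" example above (left-track occupancy of a double-track section: no tracks to the left, one to the right).

```python
def occupancy_label(tracks_left: int, tracks_right: int) -> str:
    """Build the two-digit class label for the CNN classifier.

    tracks_left / tracks_right: number of unoccupied adjacent tracks
    on each side of the track the train occupies. Hypothetical helper
    illustrating the labelling convention described in the abstract.
    """
    if not (0 <= tracks_left <= 9 and 0 <= tracks_right <= 9):
        raise ValueError("each side must hold between 0 and 9 tracks")
    return f"{tracks_left}{tracks_right}"

# Left-track occupancy on a double-track section:
# no tracks to the left, one unoccupied track to the right -> "01".
assert occupancy_label(0, 1) == "01"
```

Under this convention, the first model's four-track limit corresponds to a small, fixed set of such labels, which is what makes the task a plain image-classification problem for the CNN.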
The first model, which was limited to recognizing up to four tracks, achieved 60 % accuracy. The results were adequate considering the unsuitable method used to train the model and detect tracks. It was not considered further, as the discovery of the second method involving YOLOv3 resulted in a more suitable model for the task. The second model was limited to recognizing up to three tracks due to limited availability of processing power, computer memory, and time. The performance of the second model was evaluated using clips of different track scenarios.

In summary, the second model performed well in the following scenarios:

- Main-track detection in any environment.
- Side-track recognition in simple environments.

It performed moderately in the following scenarios:

- Medium-illuminated tunnels.
- Tracks seen through windscreens obscured by water droplets.
- Side-track detection in complex environments.

It performed poorly in the following scenarios:

- Low-illuminated tunnels.
- Bright tunnel exits.
- Side-track detection in snowy conditions.

In conclusion, it is possible to create a computer-vision model for track recognition. Although the results presented in this thesis are promising in certain scenarios, the image dataset is far too limited: only approximately 350 labelled images were available for model training. To develop a full-scale AI-based positioning system, many more images must be used to fully encapsulate all possible track scenarios. Furthermore, numerous technical specifications must be defined for the development of such a large-scale system, such as camera type (normal, thermal, event-based, lidar, etc.), system design, safety analysis, and a system evaluation strategy. Nevertheless, if the development of an AI-based positioning system is successful, it could evolve into a future full-scale railway system for autonomous freight, passenger, and shunting operations.
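The scenario-by-scenario evaluation described above can be tallied with a short script like the one below. This is a sketch only: the function name, the scenario strings, and the per-frame records are illustrative assumptions, not data or code from the thesis.

```python
from collections import defaultdict

def per_scenario_accuracy(frame_results):
    """Aggregate per-frame detection outcomes into accuracy per scenario.

    frame_results: iterable of (scenario, detected_correctly) pairs,
    one per evaluated video frame. Returns {scenario: accuracy}.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for scenario, correct in frame_results:
        totals[scenario] += 1
        if correct:
            hits[scenario] += 1
    return {s: hits[s] / totals[s] for s in totals}

# Illustrative frames only -- not measurements from the thesis.
frames = [
    ("main track, open line", True),
    ("main track, open line", True),
    ("low-illuminated tunnel", False),
    ("low-illuminated tunnel", True),
]
print(per_scenario_accuracy(frames))
```

Grouping frames by scenario in this way is what allows the well / moderate / poor breakdown above, rather than a single overall accuracy figure that would hide the model's weakness in tunnels and snow.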
