A MODEL-INDEPENDENT METHODOLOGY FOR A ROOT CAUSE ANALYSIS SYSTEM: A STUDY INVESTIGATING INTERPRETABLE MACHINE LEARNING METHODS
Abstract: Today, companies like Volvo GTO experience a vast increase in data and in the ability to process it. This makes it possible to use machine learning models to construct a root cause analysis system that can predict, explain, and prevent defects. However, there is a trade-off between model performance and explanation capability, both of which are essential to such a system.

This thesis aims to use machine learning models to inspect the relationship between sensor data from the painting process and the texture defect "orange peel". A further aim is to evaluate the consistency of different explanation methods.

After the data was preprocessed and new features, e.g. adjustments, were engineered, three machine learning models were trained and tested. A linear model can be explained through its coefficients. For a tree-based model, MDI (mean decrease in impurity) is a common global explanation method. SHAP is a state-of-the-art model-independent method that can explain a model both globally and locally. These three methods were compared in order to evaluate the consistency of their explanations: if SHAP is consistent with the others on a global level, it can be argued that SHAP can also be used locally in a root cause analysis.

The study showed that the coefficients and MDI were consistent with SHAP, as the overall correlation between them was high and they tended to weight the features in a similar way. Based on this conclusion, a root cause analysis algorithm was developed with SHAP as the local explanation method. Finally, it cannot be concluded that there is a relationship between the sensor data and "orange peel", since the adjustments of the process were the most impactful features.
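The consistency check described in the abstract — comparing linear coefficients, tree-based MDI, and global SHAP importances by their correlation — can be sketched as follows. This is a minimal illustration on synthetic data standing in for the (confidential) paint-process sensors, not the thesis's actual pipeline; it uses the closed-form "Linear SHAP" result (for a linear model with independent features, the SHAP value of feature j for sample i is coef_j * (x_ij - mean_j)), so no external SHAP package is needed. All variable names and the data-generating setup are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from scipy.stats import spearmanr

# Synthetic stand-in for the sensor data: five standardized "sensor"
# features with decreasing true influence on the target.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 5))
true_coef = np.array([5.0, 4.0, 3.0, 2.0, 1.0])
y = X @ true_coef + 0.1 * rng.standard_normal(500)

# Global explanation 1: absolute coefficients of a linear model.
lin = LinearRegression().fit(X, y)
coef_importance = np.abs(lin.coef_)

# Global explanation 2: MDI (mean decrease in impurity) of a tree ensemble.
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
mdi_importance = rf.feature_importances_

# Global explanation 3: SHAP. For a linear model with independent features,
# the per-sample SHAP values are coef_j * (x_ij - mean_j); the global score
# is the mean absolute SHAP value per feature.
shap_values = lin.coef_ * (X - X.mean(axis=0))
shap_importance = np.abs(shap_values).mean(axis=0)

# Consistency: rank correlation between SHAP and the other two explanations.
rho_coef, _ = spearmanr(coef_importance, shap_importance)
rho_mdi, _ = spearmanr(mdi_importance, shap_importance)
print(f"SHAP vs coefficients: {rho_coef:.2f}, SHAP vs MDI: {rho_mdi:.2f}")
```

High rank correlations here would mirror the thesis's finding that the three methods weight features similarly on a global level, which is the argument for then trusting SHAP's local (per-sample) explanations in the root cause analysis.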