Ranking Software Reliability Measures using the Best Worst Multi-Criteria Decision-Making Method

University essay from Blekinge Tekniska Högskola / Department of Software Engineering

Abstract:

Background: The production of high-quality software products has been a long-standing challenge for the software industry. The advent of agile software development methodologies has significantly accelerated the development pace, which in turn requires an equally agile verification and validation process to ensure software quality. Software metrics have become an integral part of this verification and validation process and play a crucial role in evaluating software quality. Organizations therefore need a robust set of metrics to perform cost-effective and reliable quality assessments. This constant drive for improvement has led organizations to seek ways to enhance their quality assessment process, often by improving their set of software metrics. However, not all available software metrics provide desirable results, making it imperative to select the measures that bring value to the organization. The ranking and selection of software metrics can be carried out using multi-criteria decision-making methods. This background serves as the basis for this research.

Objectives: The study aims to rank software reliability measures published in the academic literature against metric validation criteria within an industrial context.

Methods: The research is centered on Ericsson's development environment, and a case study was chosen as the research methodology. A recent multi-criteria decision-making technique known as the Best Worst Method (BWM) was used to rank the software reliability measures, deployed through an online questionnaire.

Results: The empirical investigation revealed that "mean time to failure", "mean downtime", and "defect rate" were considered the most important software reliability metrics based on the validation criteria. The most useful criteria for comparing these measures were found to be "Actionability" and "Predictability", while "Non-exploitability" and "Maturity" were identified as the least useful. However, the practical application of the BWM technique for evaluating these measures was found to be complex and time-consuming in real-world scenarios.

Conclusions: The study has significant practical and academic implications in the field of software reliability measures. From an industrial perspective, it provides organizations and individuals involved in software development and management with a valuable framework for ranking software measures. By using the Best Worst multi-criteria decision-making method to rank software reliability measures, the study offers a practical and efficient way to assess the importance of software metrics and determine the measures best suited to specific organizational needs. The majority of the participants found the ranking framework useful, easy to learn and adapt, and promising for real-world applications. From an academic perspective, the study highlights the importance of multi-criteria decision-making techniques in ranking software reliability measures and the advantages of the BWM over the conventional AHP method. The study also provides a comprehensive list of seven validation criteria for assessing software metrics, which can serve as a basis for future research and development in this field.
In summary, the study’s results have practical and theoretical implications and contribute to advancing the field of software reliability measures.
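
The Best Worst Method referred to above derives criterion weights from two comparison vectors: the decision-maker's preference of the best criterion over every other criterion, and of every criterion over the worst one. The sketch below illustrates the linear BWM formulation solved as a small linear program; the criteria names and comparison values are hypothetical placeholders and do not reproduce the study's questionnaire data or implementation.

    # Minimal sketch of the linear Best Worst Method (BWM) weight derivation.
    # Criteria names and comparison values below are illustrative assumptions,
    # not the study's actual questionnaire data.
    import numpy as np
    from scipy.optimize import linprog

    def bwm_weights(criteria, best, worst, best_to_others, others_to_worst):
        """Solve the linear BWM model: minimize xi subject to
        |w_best - a_Bj * w_j| <= xi and |w_j - a_jW * w_worst| <= xi,
        with the weights summing to one."""
        n = len(criteria)
        b_idx, w_idx = criteria.index(best), criteria.index(worst)

        # Decision variables: w_1 .. w_n followed by xi; objective: minimize xi.
        c = np.zeros(n + 1)
        c[-1] = 1.0

        A_ub, b_ub = [], []
        for j, crit in enumerate(criteria):
            a_bj = best_to_others[crit]
            a_jw = others_to_worst[crit]

            # |w_B - a_Bj * w_j| <= xi  ->  two linear inequalities
            row = np.zeros(n + 1); row[b_idx] += 1; row[j] -= a_bj; row[-1] = -1
            A_ub.append(row); b_ub.append(0.0)
            row = np.zeros(n + 1); row[b_idx] -= 1; row[j] += a_bj; row[-1] = -1
            A_ub.append(row); b_ub.append(0.0)

            # |w_j - a_jW * w_W| <= xi  ->  two linear inequalities
            row = np.zeros(n + 1); row[j] += 1; row[w_idx] -= a_jw; row[-1] = -1
            A_ub.append(row); b_ub.append(0.0)
            row = np.zeros(n + 1); row[j] -= 1; row[w_idx] += a_jw; row[-1] = -1
            A_ub.append(row); b_ub.append(0.0)

        # Weights must sum to one.
        A_eq = [np.append(np.ones(n), 0.0)]
        b_eq = [1.0]

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * (n + 1), method="highs")
        weights = dict(zip(criteria, res.x[:n]))
        consistency = res.x[-1]  # xi*: smaller values mean more consistent judgements
        return weights, consistency

    # Hypothetical example using three of the validation criteria named in the study.
    criteria = ["Actionability", "Predictability", "Maturity"]
    weights, xi = bwm_weights(
        criteria,
        best="Actionability", worst="Maturity",
        best_to_others={"Actionability": 1, "Predictability": 2, "Maturity": 8},
        others_to_worst={"Actionability": 8, "Predictability": 4, "Maturity": 1},
    )
    print(weights, xi)

In this formulation the decision-maker supplies only 2n-3 pairwise comparisons (best-to-others and others-to-worst), which is why BWM is often preferred over AHP, where a full n-by-n comparison matrix is required.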
