Utilizing Input Simulation for Video Game Test Automation: A Case Study

University essay from KTH/Skolan för elektroteknik och datavetenskap (EECS)

Abstract: In a typical software project, it is common for half of the development time and cost to be spent on testing. Software test automation has expanded rapidly in recent years because of its capacity to test features quickly and efficiently, but in the video game industry the concept is still in its infancy and common practices are still being developed. Keeping automated tests as close as possible to the user's experience is desirable to ensure a resilient and bug-free interactive experience. The ability to automate a test in this way could save manual Quality Assurance (QA) analysts and developers considerable time in finding and reporting bugs early, which in turn can help companies save both time and resources. This thesis explores the use of input simulation, i.e., simulating keyboard and mouse inputs, in the context of video game test automation. For the design and implementation of the input simulation framework, Agile Scrum and Human-Centered Design methodologies were followed. Exploratory interviews were conducted with 2 test automation engineers at Fatshark, desk research was used to explore existing tools and their benefits, and proofs of concept (POCs) were created with tools selected from the desk research on two of the company's games, Vermintide 2 and Darktide. From this, a new framework named TestifyInput, best fitting the needs of the company, was created and implemented as part of Fatshark's in-house test automation framework, Testify, on both the engine and gameplay side, written in C++ and Lua respectively. TestifyInput was then evaluated through an automated test implemented to test weapon interactions in the game, a user observation with 5 QA testers completing 2 tasks, and a questionnaire sent to 7 QA testers.
Using metrics such as defect detection, speed, and limitations, TestifyInput was evaluated within the existing test automation context and against its human counterparts. The results showed that input simulation allowed for better test coverage and made it possible to test close to the actual user experience. TestifyInput itself remains relatively easy to implement and use, and the test case written with it remained stable, requiring no modification despite changes to the gameplay code. However, further research is needed to fully compare the capabilities of a traditional test case with one employing input simulation.
