Mimic human problem-solving behaviour
The rapid development of new and intricate software in an ever-growing supply of digital products makes it challenging to test that software properly and thoroughly. The EVI project seeks to address this challenge by creating machine learning algorithms for automatic software testing, inspired by how people find mistakes. Frits Vaandrager: “People often learn how a new device works by playing with it – such as pressing buttons on a new camera – and then discover design errors, such as a user interface issue. Our algorithms learn how a device works while testing it, and are constantly alert to software bugs.”
Black-box testing
The EVI project tackles fundamental research questions in black-box testing of reactive systems, where only the interface behaviour is observed and the internal workings are not examined. Unlike traditional model-based testing, which requires specialised expertise to create specification models, black-box checking uses an active model learning algorithm that automatically constructs detailed hypothesis models from test outcomes. This reduces the need for human modelling expertise: testers only need to provide high-level requirements, and the system then verifies whether the learned model meets those requirements. The result is more automated and more effective detection of software bugs, including bugs often missed by other tools.
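To make the idea of learning while testing concrete, the loop above can be sketched with a toy L*-style learner for Mealy machines. Everything here is an illustrative assumption, not the EVI project's actual algorithms: the `CoffeeMachine` system under test (with a deliberately seeded bug), the observation table, and the random-testing equivalence check are a minimal sketch of how a hypothesis model is built from test outcomes and refined with counterexamples.

```python
import itertools
import random

class CoffeeMachine:
    """Hypothetical black-box SUT: a coffee costs two coins. Seeded bug:
    the credit counter is not reset after dispensing, so every coffee
    after the first one is free."""
    INPUTS = ("coin", "button")
    def __init__(self):
        self.reset()
    def reset(self):
        self.credit = 0
    def step(self, inp):
        if inp == "coin":
            self.credit = min(self.credit + 1, 2)
            return "ok"
        if self.credit >= 2:
            # BUG: should do `self.credit = 0` here
            return "coffee"
        return "nothing"

def out(sut, word):
    """Output query: run `word` from the initial state, return the last output."""
    sut.reset()
    o = None
    for i in word:
        o = sut.step(i)
    return o

def learn(sut, inputs, rounds=50, test_len=12, seed=0):
    """L*-style active learning with random testing as the equivalence check."""
    rng = random.Random(seed)
    S = [()]                        # access prefixes (candidate states)
    E = [(a,) for a in inputs]      # distinguishing suffixes
    row = lambda s: tuple(out(sut, s + e) for e in E)
    for _ in range(rounds):
        # Close the observation table: every one-step successor of a known
        # row must itself be a known row.
        closed = False
        while not closed:
            closed = True
            rows = {row(s) for s in S}
            for s, a in itertools.product(S, inputs):
                if row(s + (a,)) not in rows:
                    S.append(s + (a,))
                    closed = False
                    break
        # Build the hypothesis: states are the distinct rows.
        reps = {}
        for s in S:
            reps.setdefault(row(s), s)
        trans = {(r, a): row(s + (a,)) for r, s in reps.items() for a in inputs}
        outf = {(r, a): out(sut, s + (a,)) for r, s in reps.items() for a in inputs}
        init = row(())
        def hyp_out(word):
            q, o = init, None
            for i in word:
                o = outf[(q, i)]
                q = trans[(q, i)]
            return o
        # Approximate equivalence check: random conformance testing.
        cex = None
        for _ in range(200):
            w = tuple(rng.choice(inputs) for _ in range(rng.randint(1, test_len)))
            if hyp_out(w) != out(sut, w):
                cex = w
                break
        if cex is None:
            return hyp_out, len(reps)   # hypothesis survived all tests
        # Refine: add all suffixes of the counterexample (Maler–Pnueli style).
        for k in range(len(cex)):
            if cex[k:] not in E:
                E.append(cex[k:])
    raise RuntimeError("did not converge")
```

The learned three-state model can then be checked against a high-level requirement such as “a coffee requires two coins”: the hypothesis admits the trace coin·coin·button·button with two coffees for two coins, exposing the seeded bug without anyone having modelled the machine by hand.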
Collaboration
The project is a collaborative effort between Radboud University and the University of Twente, spanning five years. Frits Vaandrager brings his extensive expertise in algorithms for model learning, particularly in learning algorithms for extended finite state machines and I/O automata. His work has led to significant discoveries, such as identifying standard violations in network protocols and legacy software.
Co-applicant Petra van den Bos from the University of Twente complements the project with her specialisation in model-based testing and test generation algorithms. Her expertise includes systems with data and nondeterminism, and test automation in behaviour-driven development.
Furthermore, the grant enables the appointment of two PhD students, a programmer, and several student assistants to carry out research and outreach.
Societal impact
This groundbreaking research promises to revolutionise automatic software testing, enabling software engineers to detect more bugs faster and more efficiently. The NWO-ENW M grant recognises the potential impact of this work on industry and the broader field of software engineering. Vaandrager: “The crucial importance of software testing was illustrated earlier this month when CrowdStrike, a cybersecurity company, sent out a flawed software update that took down 8.5 million Windows machines. Costs from this global outage could easily top $1 billion. In a preliminary Post Incident Review (PIR) published this week, CrowdStrike blames a bug in its test software for not properly validating the faulty update.”