Track Talk

Testing AI Systems: Creating Awareness

Peter Collewijn

Hannie van Kooten

10:30-11:00 CEST Wednesday 29th September

The role of artificial intelligence (AI) in our daily lives is growing, especially in the field of machine learning. This growth is driven not only by the vast amounts of available data but also by cheap computing power. The technology is currently within reach of anyone interested, and the number of applications is expected to grow even further in the coming years. However, progress in testing AI systems is lagging behind. This is becoming a problem, as independent testing is a necessity for the trustworthiness and acceptance of AI systems in society.

The behaviour of AI systems differs from that of rule-based systems. Business rules and specifications are not readily available, which causes the oracle problem: it is difficult to verify whether test results are correct. The strong dependency on data and the probabilistic nature of the outcomes make testing AI systems far more complex. Our current standard test approaches are therefore not sufficient for AI systems.
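One common way to cope with the oracle problem is to replace exact expected outputs with statistical assertions over many samples. The following is a minimal sketch of that idea, not taken from the whitepaper: `noisy_classifier` is a hypothetical stand-in for a probabilistic model, and the test asserts an aggregate accuracy threshold rather than a per-case result.

```python
import random

# Hypothetical stand-in for an AI model: a classifier whose output
# is only correct with some probability (non-deterministic behaviour).
def noisy_classifier(x, error_rate=0.1, rng=random):
    true_label = x >= 0              # ground truth for this toy task
    if rng.random() < error_rate:
        return not true_label        # the model occasionally misclassifies
    return true_label

# Without a reliable oracle for individual outputs, we assert a
# statistical property over many samples instead of exact results.
def test_accuracy_threshold(n=10_000, threshold=0.85, seed=42):
    rng = random.Random(seed)
    inputs = [rng.uniform(-1, 1) for _ in range(n)]
    correct = sum(noisy_classifier(x, rng=rng) == (x >= 0) for x in inputs)
    accuracy = correct / n
    assert accuracy >= threshold, f"accuracy {accuracy:.3f} below {threshold}"
    return accuracy

print(f"observed accuracy: {test_accuracy_threshold():.3f}")
```

The threshold and sample size here are illustrative; in practice they would follow from the risk analysis of the system under test.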

This presentation summarizes the whitepaper “Testing AI Systems” by the working group “Testing and AI” of the Dutch TestNet association. The group's ambition is to create best practices for testing AI systems. It provides the necessary information in white papers and presentations so that fellow testers can identify the risks of implementing AI systems and address the associated AI testing process.

The whitepaper was written after studying papers, reading articles and drawing on our own experiences with testing AI systems. However, our aim is not only to create awareness within our own TestNet community. We want to involve the international testing community in this research, so that we can share experiences and overcome future challenges related to testing AI systems.

The whitepaper is due to be published in the Summer of 2021.